
Friday, 20 March 2020

Using SQDCM Boards the Right Way for Effective Gemba Walks

Based on a request that stemmed from a discussion earlier in the year, I spent three days at a company’s plant to help them fine-tune their Gemba walks. Several of their floor leaders had attended a training session and approached me with questions. The training highlighted a problem they had been having since they implemented their first safety, quality, delivery, cost (sometimes inventory and/or productivity), morale (SQDCM) board about one year earlier.


Figure 1: Template of an SQDCM Board


Figure 2: Example of Modified SQDIP Board


Figure 3: Example of Modified SQDIPM Board

Development of a Typical SQDCM Board


The first SQDCM board described in this example (not shown in the photographs) was put in the department occupying the middle of both the physical facility and the process flow. They had included an accountability process at the board as well. As the year went by, they expanded into other process areas within the facility.

The metrics that were chosen were, for better or worse, “handed down” from the corporate office located in another city. Those metrics were used at each board, with little or no modification permitted. This led to difficulty for many of the company’s leaders. One big problem: The daily Gemba walk was conducted in the order the boards were created, not in the logical flow of the process. As with so many misunderstood elements in an attempted Lean transformation, the focus was on not being red, rather than identifying and correcting problems.

What did this specifically look like? Red items on the fancy SQDCM letters were transferred down to a countermeasure (C/M) sheet, but in many instances, no actions were identified. There was even a checkbox for “no C/M needed.” (Hint: If you have a red, it should always have a corresponding countermeasure!) In addition, red situations were glossed over quickly so green situations could be highlighted.

None of this is uncommon. Too many organizations attempting to either do Lean or possibly transform simply don’t understand the concepts and theory behind the numerous tools. SQDCM isn’t simply putting those letters up on a board. Gemba walks aren’t simply management by walking around (MBWA) – popular in the ’80s and ’90s. Strategic deployment isn’t simply an X-matrix in the CEO’s office or, even worse, an annual operating plan.

When organizations truly want to benefit from Lean thinking, the leadership understands this requires a transformation – the entire business needs to change the way it operates. Most importantly, the organization needs to understand that Lean is a holistic system. This is why a tree is one of the most common visualizations of Lean. The roots, the soil, the trunk, the branches, the leaves, the sun and the rain all must work together for the tree to maximize its potential. Simply dropping a seed in the ground doesn’t guarantee success.

Tool Use of SQDCM Boards


So, what should this complete transformation look like? Let’s start with two points of basic tool use of SQDCM boards.

1. Always start at the point closest to the customer. It doesn’t matter how good your manufacturing is if you can’t get product out of shipping. Put your first board there. See how you are performing to your external customer. Then put the boards in place moving backward. That might be warehouse, packing, painting, assembly – whatever the process is that delivers to shipping. Don’t wait months between boards. Push for one every week or two. End with boards in scheduling, purchasing and maintenance. If appropriate, get boards to HR, finance, sales, customer service and R&D too. Just remember – start closest to the customer and work your way backward.

2. Determine the right metrics for the boards. Safety is always first. The metrics are presented in the name in order of priority (safety, quality, delivery, cost [sometimes inventory and/or productivity], morale – although I often move morale ahead of safety, in keeping with Toyota’s respect-for-people mantra). The metrics at the floor need to relate to the floor but also need to tie to company goals.

Floor-Specific Goals


Boards that have the same percentage metric or labor/hour metric across departments aren’t scaled to reflect the specific areas. (I’ve never seen shipping, assembly and machining have identical outcomes.) Pay close attention to the behavior the metric drives. For example, with the OSHA total recordable incident rate (TRIR), it either becomes an impossible goal due to injuries, or employees hide injuries to avoid affecting the goal. Instead, try measuring something like “more than five safety opportunities identified each week.” This is a proactive goal. In theory, the more opportunities identified, the fewer actual injuries occur. Thus, the floor metric is supporting the corporate metric of reduced recordable injuries.

It is important to note that not only do these goals need to be floor-specific (as opposed to company-specific), but they also need to be customer-driven. Each department’s customer is the next department in line. Is the machining department delivering product at spec, on time to assembly? Is shipping delivering to the delivery company in the same way? If the organization can deliver on time internally, corporate delivery performance will be improved. Likewise, each department should be managing inventory. Be careful not to simply manage dollars or pieces. Rather, the inventory goal should be reflected by inventory turns and stock-outs. These two metrics drive each department to control their inventory level and strive to have the right inventory on hand. This helps control cost at the corporate level.

Connection with Strategic Deployment


Now, tie this to strategic deployment (SD). As noted above, the plant SQDCM should connect to the corporate goals. This means there should be deployment of those SQDCM goals from top to bottom.


Figure 4: Connect the Plant SQDCM to the Overall SD

When the X-matrix is created, it should be a three- to five-year plan, with annual achievements. The targets to improve should be related to the stretch goals of the organization (or facility). The plant referred to at the beginning of the article faltered here. They simply took the corporate deployment goals (more like annual operating plan goals) and rolled them into department SQDCM boards. This made the measurements cumbersome and difficult to relate to employees, and it stifled discussion during the Gemba walks.

Strategic Deployment Is Critical


Early in my career, we started with SQDCM boards without SD. I try not to do this now. For our first 18 months, we struggled with the process. When we initiated SD, it seemed to tie everything together for our managers. It didn’t affect the direct labor employees simply because our initial SQDCM was floor-specific (because there was no strategic deployment). Note: Initial attempts at this process can be time-consuming for an organization. However, the planning becomes both easier and faster with experience.

Here are a couple other tricks for the SD process.

1. Start the process in October. This ensures the organization is ready by January. Remember, a strong SD process has catchball – the two-way process of goal discussion going from executive offices to shop floor.

2. Build a Gantt chart into the SD process. Many first- and second-year SD processes forget about timelines. Plants expect to hit goals in January when the projects aren’t even active until April.

As shown in Figure 5, the goal of the SQDCM board isn’t the pretty green or red letters at the top; it is to drive problem solving. The focus should be on cause identification and solution implementation, not on whether the letter is red or green.


Figure 5: Root Cause Identification and Problem Solutions

Gemba Walk


Next, on to the Gemba walk! There are many, many ways to conduct the walk process. There can be different tiers, different walk frequencies and different expectations of leaders. This describes my way. To start, the highest level in the facility should be on the walk daily (what is often identified as the senior staff). Each department in the facility should also be represented (this often goes beyond the senior staff). Strong walks are supported by leaders holding team huddles with their hourly employees to start the shift, before the walk.

The walk should start closest to the customer, just like board implementation. The walk works back until it ends at receiving (or sometimes sales/customer service/R&D). Green is acknowledged, but red is discussed. What was the problem, what caused it, what temporary countermeasure was used, what is the next step for a long-term C/M? The temporary C/M is often where teams stop in the process (if they determine anything at all). This displays a lack of Lean understanding.

The challenge is to determine the long-term C/M. It is important to note that the most critical part of the SQDCM board is not the pretty green or red letters, but rather the countermeasure sheet. This shows the actions to get the department (or facility or company) back to green. In addition, the lower information on the board often reflects trends over time. This, too, can be critical to ensure departments aren’t chasing squirrels and missing the bigger picture. The focus should always be on the ideal state (defined as 1×1, on-demand, on-time, with perfect quality, safely created and delivered, at the best cost).

Source: isixsigma.com

Friday, 3 January 2020

It's Not Common Sense…It's a Sixth (Sigma) Sense

Many times when Lean and Six Sigma are introduced to an executive management team, there will be an individual who makes the statement: “This is just common sense. Why do we need to go through all this methodology, training and the statistics stuff to execute a simple project?”

A large segment of thought leaders in corporate America believe in the “just do it” approach to change. To them, the answers to process improvement needs are obvious. They think if everyone were as bright and motivated as they, these projects would get done … and the projects would get done on time and under budget. Of course, a Six Sigma practitioner would say, if the solution is known, then by all means implement it. The discipline of Lean and Six Sigma should be utilized on issues where the solution is not known.

While Six Sigma often gets a bad rap of “slowing down project implementation,” much of that is based on the up-front effort in properly defining the problem and collecting the appropriate data to determine the root cause of a problem. Once individual leaders become familiar with the concept of root cause, then they are willing to jump on the bandwagon and admit that common sense alone might not have allowed them to discover the solution to a customer requirement or systematically find the unknown cause to the process problem.

Real Common Sense Has a Role in Six Sigma Projects


Yet on the flip side, there are Six Sigma practitioners who downplay the role of common sense and hold on to tools alone – at the expense of the valuable insights that years of hands-on experience can bring. Common sense in the context of historical knowledge of a particular business – including a grasp of best practices and insights on how to make things happen in that business culture – is an invaluable ally of the Six Sigma disciplines needed for effective change. It is when common sense is the code word for “just do it” that problems can occur and warning flags should be seen.

In many instances, the just-do-it mentality can be hidden within the argument that a project methodology already exists. At any thriving organization, change has been going on for a long time, either formally or informally. If the best aspects of Lean and Six Sigma are to be leveraged to improve change dynamics, then it is best to introduce them as a way to augment the current project methodology and not as something to replace it. Building on an organization’s change history makes Lean and Six Sigma more acceptable to change agents by emphasizing and celebrating past success rather than tearing down and starting over.

This approach plays well in companies where the leadership group sees Six Sigma as overly complicated and as taking steps back before being able to move forward. A common sense approach to implementing Six Sigma concentrates on the aspects of Six Sigma that move the company forward, rather than engaging in a philosophical debate of Six Sigma versus current project approaches.

Six Sigma Principles That Are Key to Successful Project Execution


The principles of Lean and Six Sigma most important to emphasize when integrating Six Sigma into an organization’s existing project methodology or philosophy are:

◉ In Six Sigma, the voice of the customer defines quality in terms of meeting customer expectations. Traditional project methodology usually concentrates on the quality and speed of the project implementation, itself, and not of the measurable customer experience.

◉ In a Six Sigma program, key leadership and P&L owners are trained and actively engaged in the process, with the CEO playing a key role in company-wide initiatives. When initiatives are scoped within only a single line of business, the key manager or Champion in that business unit plays the leadership role.

◉ A Six Sigma approach to change provides defined organizational roles (Green Belts, Black Belts, Master Black Belts, Champions, sponsors, etc.) to create accountability. Many companies choose to not use Six Sigma terminology when describing these roles. However, the clear definition of roles helps promote project success in a Six Sigma environment.

◉ Six Sigma promotes a critical mass of dedicated resources deployed to ensure success and help define capacity for change. Many times in a traditional project or change environment, management expects projects to be done in addition to the day-to-day job. While not all participants need be dedicated on a full-time basis, the critical roles should be full-time resources who continually move process improvement forward.

◉ Six Sigma uses a value-based project selection process and a rigorous system of projects-in-process management. Choosing projects that cut across organizational silos allows change to occur within the processes that impact customers, rather than within discrete business units with little or unknown customer impact.

◉ The DMAIC (Define, Measure, Analyze, Improve, Control) methodology has, under its umbrella, both Lean and Six Sigma tools that can simultaneously focus on speed and quality. This integration of Lean tools speeds up processes by removing waste and non-value-added process complexity. These tools are additional ones in the arsenal of traditional project tools that make change happen.

Project Management Principles Key to Six Sigma Implementation


There also are aspects of an existing project methodology and change culture historically found within an organization that are critical for the successful integration of Six Sigma principles.

The traditional project management approach to change leverages an infrastructure to plan, manage and control the change initiative. Typically, a company will utilize a project management office that acts as the central point for all information and tracking of critical initiatives. Likewise, such a mechanism is essential for tracking the progress of all Six Sigma projects in regard to meeting deadlines and staying within budget. The project management office also is a good place to track all critical-to-quality metrics and expected financial benefits of Six Sigma projects.

A traditional project approach includes tools, templates and methodologies to ensure implementation success. These will include work plans, issues lists, task action records and all documentation related to managing the risk and communications around a project or wave of projects. Many of these traditional project management tools fit nicely into the “tollgating” process required between each phase of the DMAIC methodology.

Emphasis in a traditional project approach is on the project management process itself and implementation quality. This goes hand in hand with the Six Sigma approach that the end game of the project is to address speed and quality of the “operational process,” which, in turn, favorably impacts the customer experience and the bottom line of the organization. So, a common sense approach indicates that Lean/Six Sigma and traditional project management are not either-or propositions, but critical elements in a successful change management approach. Of course, the two do not operate totally distinct from one another, but in a fashion where the sum of the two is greater than each by itself.

Six Sigma’s influence on a traditional approach to change management is that it helps shift the focus more directly on the customer experience and toward data-driven process improvement. The examples in the figures below show the effects of Six Sigma being introduced into a traditional merger methodology, a traditional mapping approach and a training and communications plan.


Figure 1: Merger Approach


Figure 2: Process Mapping


Figure 3: Training and Communications

Monday, 2 September 2019

Improving Process Turnaround Time in an Outpatient Clinic

Historically, medical residency teaching clinics provide the heart of medical services to an under-insured population through various government-subsidized health insurance programs. These programs provide medical coverage for eligible individuals with incomes of less than 35 percent of the federal poverty level. It is well-established that people living in poverty are at a higher risk for chronic diseases, such as hypertension, diabetes, dyslipidemia, obesity and psychiatric disorders. Low-income population groups may also experience barriers to receiving healthcare services, such as lack of transportation.

Medical educational residency clinics are challenged to provide accessible, ongoing, quality care while being sensitive to the special needs of the population group they serve. They also must maintain the goal to train new physicians in a fiscally viable manner. Nationally, residency teaching clinics have inefficiencies that cause long patient wait times. Patient wait time for medical care has a direct impact on patient satisfaction, medical compliance, return show rate and patient attitudes toward clinicians, staff and clinics in general.

The Family Ambulatory Health Center (FAHC), located on the main campus of Hurley Medical Center, a public, non-profit teaching medical center in Flint, Mich., consistently scored low in patient wait times on patient satisfaction surveys. Patient wait times to see an internal medicine resident physician in the Hurley FAHC exceeded the patient threshold, causing dissatisfaction, poor medical compliance and high no-show rates. Dissatisfied patients created a domino effect, leading to dissatisfied resident physicians and clinic staff. The resident physicians became frustrated by not being able to manage clinic patients with chronic diseases effectively due to high no-show rates for follow-up appointments. To combat this problem, a Black Belt at the Hurley FAHC began a Six Sigma project.

Define


In January 2010, the Hurley FAHC, in collaboration with North Shore-LIJ Health System, implemented a process improvement project to reduce patient turnaround time (TAT) and improve quality in the internal medicine residency clinic. The project was sponsored by the clinic manager, who monitored the patient satisfaction surveys. The Hurley project was employed to determine clinic inefficiencies and to improve the patient flow process. The premise of this project was that decreased patient wait-times in the internal medicine clinic would increase overall patient satisfaction. The Black Belt leading the project formed a team consisting of a registered nurse, a licensed practical nurse, clerical staff, a nurse practitioner, resident physicians and faculty/clinic physicians. It was important to ensure that the core team and extended members included individuals that have direct contact with the process.

During the Define stage, the team developed a high-level process map to help understand the larger process and to gain consensus for the overall scope of the project (Figure 1).


Figure 1: High-level Process Map of a Patient Clinic Visit

The process starts when the patient checks in at the registration desk and ends when the patient checks out at the end of the clinic visit. The initial data reflected the total patient visit TAT to be an average of 115 minutes from beginning to end. This process had an upper specification limit (USL) of 60 minutes, which was decided upon by the project sponsor.

Measure


The goal in the Measure phase was to determine a baseline metric of the identified overall Y (TAT) from start to finish. The process map was used to identify each step, and a data collection tool was created to capture a metric for the designated incremental steps within the clinic. The team decided to measure the time required to complete the following process increments:

1. Patient checks into the clinic and clerical staff takes chart to holding queue

2. Clinical staff brings patient to exam room and puts chart in resident holding queue

3. Patient waits in exam room for resident physician

4. Resident physician sends patient to check out

5. Patient waits to be checked out of clinic

Baseline data for the patient TAT in the adult medical clinic was collected from Mar. 8 to Mar. 26, 2010, during each adult medical clinic session in that time frame. The results are represented by the process capability graph in Figure 2.


Figure 2: Process Capability of Turnaround Time

The sample N was 362 patient visits, which reflected an average TAT of 115 minutes, with a standard deviation of 32.5 minutes. The USL of 60 minutes translated to a baseline defect per million opportunities (DPMO) rate of 953,039 and a corresponding sigma score of -2. Based on these metrics, the Black Belt determined that the process met the customer expectation of a 60-minute turnaround time 4.5 percent of the time.
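For readers who want to trace the arithmetic, the short sketch below (Python with SciPy; variable names are illustrative) reproduces the defect-rate calculation from the reported mean, standard deviation and USL, assuming an approximately normal TAT distribution. The result lands close to, but not exactly on, the reported 953,039 DPMO because the project team calculated capability from the actual (non-normal) sample data.

```python
# A minimal sketch, not the project's actual calculation: estimate the defect
# rate from the reported summary statistics assuming a normal TAT distribution.
from scipy.stats import norm

mean_tat = 115.0   # average turnaround time, minutes (reported)
sd_tat = 32.5      # standard deviation, minutes (reported)
usl = 60.0         # upper specification limit set by the sponsor

z = (usl - mean_tat) / sd_tat          # roughly -1.69
p_defect = 1 - norm.cdf(z)             # fraction of visits exceeding the USL
dpmo = p_defect * 1_000_000            # roughly 950,000 DPMO
pct_within = (1 - p_defect) * 100      # roughly 4.5 percent within 60 minutes

print(round(dpmo), round(pct_within, 1))
```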

In keeping with Six Sigma strategy, the team held a brainstorming session with frontline employees to delineate the perceived causes of delay and to get a broad employee perspective of the alleged problems in the patient flow process. The perceived causes are represented in Figure 3.


Figure 3: Causes Affecting Turnaround Time

Analyze


The goal in this phase was to determine which of the identified causes (x’s) of delays in patient TAT had the greatest impact on the total process. This was done by bringing the core group together again to perform a failure mode and effect analysis (FMEA).

Each step of the process was identified and reviewed to determine its failure potential (on a scale of 1 to 10) based on severity, frequency of occurrence and current detection methods. These values were then multiplied to obtain a risk priority number (RPN). The highest values are listed in Table 1.

Table 1: FMEA of Patient Visit Process

Process Step | Potential Failure | Failure Effects | Sev | Causes | Occ | Current Controls | Det | RPN
Patient into room | No room | Patient has to wait | 10 | Too many doctors, too many patients, attending Dr. not available, wait for procedure | 8 | Assign Dr.’s rooms | 1 | 80
– | Interruptions | Delay getting patient to room | – | Prescriptions, phone, paper work, page doctors | – | Locked unit doors | – | 56
Resident reviews chart | Chart not complete; resident not on time | Look up results; can’t see patient | 10, 7 | Can’t find test results, missing consult letters; rounding | 8, 7 | No; schedule | 2, 3 | 160, 147
Revisit | Paper work | More time | 10 | Scripts, forms, referrals, rechecking chart | 10 | No | – | 100
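As a quick illustration of the RPN arithmetic described above, the hypothetical helper below simply multiplies the three 1-to-10 ratings; the example values match the first row of the table (10 × 8 × 1 = 80).

```python
# Hypothetical helper illustrating the RPN arithmetic (severity x occurrence x
# detection); the function name is illustrative, not from the project.
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Each rating is on a 1-10 scale; a higher RPN means a higher priority."""
    return severity * occurrence * detection

# First row of the FMEA table: "Patient into room / No room"
print(risk_priority_number(severity=10, occurrence=8, detection=1))  # 80
```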

Further analysis was performed to discern the vital x’s using hypothesis testing to determine statistical significance. The null hypothesis – that there were no statistical differences in TAT between each process step – was rejected because the p-value (0.000) was less than 0.05, indicating that a statistical difference was found between the steps.
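The article does not name the specific test used. As a rough sketch of how such a comparison could be run, a one-way ANOVA (or a Kruskal-Wallis test, if the step times are strongly skewed) across the measured step times would produce the kind of p-value described above. The numbers below are placeholders, not project data.

```python
# A hedged sketch of one way to compare TAT across process steps; step_times
# holds placeholder minutes for three of the five increments, not project data.
from scipy.stats import f_oneway, kruskal

step_times = {
    "check_in_to_queue": [5, 7, 6, 9, 8],
    "queue_to_exam_room": [12, 15, 11, 14, 13],
    "wait_for_resident": [30, 41, 28, 35, 38],
}

f_stat, p_anova = f_oneway(*step_times.values())   # parametric comparison of means
h_stat, p_kw = kruskal(*step_times.values())       # non-parametric alternative

# Reject the null hypothesis of "no difference between steps" when p < 0.05
print(round(p_anova, 4), round(p_kw, 4))
```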

Due to the results obtained on the FMEA, the team performed additional analysis on the resident physician and clinician piece of the process. This data reflected that 42 percent and 40 percent of patients’ time was spent with a resident and clinician, respectively.

There also was a statistically significant difference in the median TAT (p-value of 0.046) between the charts that were reviewed by a resident physician prior to clinic time and those that were not.

Data analysis also showed that the more-experienced resident physicians (PGY 3) had lower median TAT scores with less variation than newer (less experienced) resident physicians (PGY 1), with a significant p-value of 0.000.


Figure 4: Boxplot of Overall TAT vs. Experience Level

In summary, the team determined that the most significant inefficiencies occurred during two time periods:

1. The time patients spent waiting in the lobby before being taken to the exam room
2. The total time patients spent in the exam room waiting for a resident physician to examine the patient, consult with a faculty physician and close the patient visit.

Improve


The team performed another brainstorming session with representatives from each area where inefficiencies occurred to find solutions that could have a positive impact on patient flow.

The first area of concern was patient wait time in the waiting room. The team found an inadequacy in the check-in process and came up with an easy solution that could be implemented without disrupting staffing boundaries or violating union contracts. This was done by eliminating a step in the check-in process. Prior to the Six Sigma project, patients would sign in and the medical assistants would process their charts. The charts would then need to be carried to the back clinical area to the holding queue for the back-clinic medical assistants to bring patients to examination rooms. The charts were being held in the front check-in area for various extraneous reasons. For instance, the front staff was being interrupted by phone calls, patient walk-ins and other miscellaneous duties.

These disruptions caused a delay in the charts being brought to the back clinical area holding queue. This problem was corrected by keeping a medical assistant in the check-in area to “arrive” patients and attend to all other duties. The check-in clerk was moved to the back clinic area near the holding queue. Under this arrangement, the charts could now be prepped promptly, without interruptions, and put into the holding queue for back-clinic staff to bring patients to the examination rooms.

There also was a delay in the time it took the medical assistant to bring the patient back to the exam room once the patient was registered in the system. In most cases, this delay was directly related to the lack of available exam rooms, which was due to the time resident physicians spent examining preceding patients. It is to be noted that each resident is assigned two exam rooms. The team understood that the rooms needed to be “turned” faster while maintaining quality care. One solution to this problem was an order board posted in a common area to prevent delays in patients waiting for common office procedures such as injections.

Solutions to resident physician-related matters included having resident physicians review patient charts before their clinic day started and to utilize electronic prescription services. As a result, faculty physicians also became more accessible to resident physicians, and resident physicians were educated on how to present a case to faculty in a more concise manner. The schedules for PGY1 resident physicians were adjusted from 15-minute blocks to 30-minute blocks until they became fully oriented to the outpatient clinic.

The team piloted the improvements with a small group of residents. The solutions that they implemented had a positive impact on the process Y by reducing the TAT from the original baseline average of 115 minutes to 94 minutes (an 18 percent reduction), with a corresponding decrease in the variation for the same pilot of residents (Figure 5).


Figure 5: Boxplot TAT Change for Pilot Group

Control


To control and sustain these improvements, the team utilized an individual and moving-range (I-MR) control chart. Monitoring the TAT helped to ensure that the process stayed in control, was stable and met the customer’s expectations.
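Figure 6 shows the chart the team used. For reference, the sketch below shows the standard I-MR limit arithmetic (individuals limits at the mean plus or minus 2.66 times the average moving range); the data are placeholders, not the clinic’s measurements.

```python
# A minimal sketch of I-MR control limit arithmetic; tat_minutes is placeholder
# data standing in for the weekly TAT measurements.
import numpy as np

tat_minutes = np.array([92, 88, 101, 95, 90, 97, 94, 89])  # placeholder values

moving_range = np.abs(np.diff(tat_minutes))
mr_bar = moving_range.mean()
center = tat_minutes.mean()

lcl_i = center - 2.66 * mr_bar   # individuals chart lower control limit
ucl_i = center + 2.66 * mr_bar   # individuals chart upper control limit
ucl_mr = 3.267 * mr_bar          # moving-range chart upper control limit

print(round(lcl_i, 1), round(center, 1), round(ucl_i, 1), round(ucl_mr, 1))
```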


Figure 6: I-MR Chart of TAT Change for Pilot Group

The team developed a control plan to measure total patient TAT continually and vital x’s for up to one year after the project introduction (Table 2). The management of the Control phase is delegated to a process owner, who is responsible for the day-to-day monitoring and measuring of the process. In this case, the process owner is the registered nurse, who completed Green Belt training and has been directly involved with the project since its inception. The Control phase guarantees continued accountability and allows time for old habits to be dropped and improved habits to be ingrained in the culture of the clinic – from the secretary who checks the patient in to the resident physician.

Table 2: Control Plan for Monitoring Patient TAT


Y/X | Description | Data Type | Measurement Method | Control/Monitor | Frequency | Alert Flags | Action | Responsibility
Y | Patient check-in to check-out | Continuous | Manual measurement tool | Control chart | Weekly | Variation in std dev | Review with clinic director | Process owner: Melissa Bachman
X | Patient arrived to roomed | Continuous | Cerner | Control chart | Weekly | Variation in std dev | Review with staff | Process owner: Melissa Bachman
X | Patient arrived to roomed | Continuous | Cerner/Manual tool | Control chart | Weekly | Variation in std dev | Review with staff | Process owner: Melissa Bachman

Notable Project Considerations


This project clearly demonstrates the Six Sigma methodology as an effective tool in defining inefficiencies and improving patient flow in a residency outpatient clinic. Six Sigma uses hard data to drive changes rather than notions based on individual perceptions, assumptions and agendas.

Although the team didn’t meet the USL of 60 minutes set by the project sponsor, the FAHC Internal Medicine Clinic has consistently reduced total patient TAT from 115 minutes to 94 minutes. It should be noted that the baseline data was collected during a time when the clinic was functioning optimally. The initial data was collected in March, when the resident physicians were performing at high levels. The PGY 2 and 3 resident physicians were very efficient, due to having two and three years of experience, respectively. The PGY 1 resident physicians were just learning the nuances of primary care and getting a handle on patient management in the outpatient setting. March also is a time when few staff members are on vacation, so the clinic was adequately staffed with seasoned RNs, LPNs, medical assistants and medical secretaries.

The post-solution measurement phase occurred during the most hectic time for an outpatient residency clinic; it coincided with the graduation of PGY 3 resident physicians. PGY3 residents were focused on terminating patient relationships, writing clinic summaries and tying up loose ends. As PGY 3 resident physicians are graduating, PGY 1 resident physicians are entering the program and going through orientation. Given the challenges during this time, the team worked extremely well in keeping the process intact.

In retrospect, the team should consider whether the USL of 60 minutes set by the project sponsor was a reasonable and achievable goal for an outpatient residency clinic. In 2006, the national average for time that a seasoned physician spent with a patient was close to 22 minutes. Given the complexity of care that the FAHC patients require, it should be expected that resident physicians would spend more time with patients than their seasoned clinician cohorts.

The goal of this project was to enhance the physician training process by increasing clinic proficiency while maintaining quality patient care. We succeeded in cutting back patient wait-times in certain key steps of the process. The project decreased the amount of time patients wait in the lobby (main waiting area) by 38 percent. In keeping with recommendations set by the Institute of Medicine, our patients wait an average of 18 minutes in the lobby after they check in to the clinic. This group recommends that 90 percent of scheduled patients should be seen within 30 minutes of their scheduled appointment. The project work decreased the amount of time the patient waits for medical services while maintaining the amount of time the resident physician spends with the patient.

In conclusion, the FAHC Internal Medicine Clinic reduced patient total TAT by 18 percent without compromising patient care. We expect to see improved patient satisfaction, improved resident physician satisfaction and improved continuity of care for our clinic patients with the improved process flow implemented through this Six Sigma project.

Monday, 13 May 2019

Seeking Reliable Six Sigma Project Data with Eight CTQs

While Micropump, which manufactures small-volume, precision-flow, sealed pumps, has been in business for 46 years and has been owned by IDEX Corporation for 11 years, it is in the last several years that innovations in its product line have accelerated rapidly. The reason is that there is now more cash to finance research and development efforts. Interestingly, the cash is not from the benevolence of the parent company, but from the results of the Operational Excellence initiative that IDEX management began five years ago.


In 2001, Micropump was in need of revitalization. Market growth was lagging and operations were not improving rapidly. IDEX turned to Six Sigma and Lean (calling the program Operational Excellence) to provide the framework and philosophy to move Micropump forward. The IDEX guideline was to choose the best and brightest and dedicate them to full-time Black Belt roles. Micropump did just that. With fewer than 75 salaried employees on staff in Vancouver, the company could scarcely afford to dedicate two of its best employees full-time to Lean Six Sigma, but it did. It was a leap of faith because of the cost of adding two positions to the company’s head count. And it meant the Black Belts had to deliver.

But having the right people working on the right projects did not move the improvements ahead fast enough. President Jeff Hohman described a moment of frustration early in the Six Sigma effort. “I had these two high-powered and highly compensated engineers – really two of my best people – spending hours scrubbing data and building databases to get the information they needed for their projects.” Micropump had to deliver benefits from the Lean Six Sigma effort quickly, and it could not do it if the Black Belts were spending days at a time cleaning up data rather than analyzing it and acting on it.

The Quest for Clean Data


The plant had been using statistical process control (SPC) since the mid-1980s and was an early adopter of SPC software. The SPC software in use when Six Sigma was launched provided data about machining, molding and product testing processes in the shop, but extracting information to fuel the Six Sigma initiative was time-consuming. The company’s vice president of engineering, and one of the first Black Belts, Charlie Carr, described it this way: “Who knew how many ways there was for an operator to enter their name?”

With no way to limit the types of entries users were making, data errors like inconsistent operator names were commonplace. Out-of-control reasons could be entered free-form, making root cause analysis a painstaking effort. And finally, all data had to be moved into another application (even if no scrubbing was required) so that it could be imported into Minitab for further analysis. This meant data had to be handled multiple times before it was even ready for analysis. The process was just too expensive and error-prone.

The first projects were focused on on-time delivery for customers. But on-time delivery data was not readily available. It had already become apparent that a new data acquisition method would be needed and that it would need to work with transactional processes as well as manufacturing processes.

Satisfied that SPC was a tool it needed to continue to use, at least in manufacturing, the company formed a project team to solve the data problem. There was a desire to know if one comprehensive solution could be applied across the business – in both manufacturing and transactions.

The team began by developing a list of critical-to-quality characteristics (CTQs) for process data:

◈ SPC must be used for process control in manufacturing – The company needed the ability to automate data collection and real-time alarms in all of manufacturing processes. The goal was to use existing quality data collection processes wherever possible. But the company wanted better support for automatic gaging, and more transparent data sharing. And process owners needed to be able to respond instantly to process shifts or special cause variation.

◈ The ability to accurately track transactional process performance – The team wanted to track manufacturing and transactional data at the same time, with the same system. While there clearly are differences between transactional and manufacturing data, there also are many similarities.

◈ A way to link information from many databases for use in operations – The company already had a lot of data in various databases. It needed a way to bridge these disparate systems.

◈ One source for process and product data – Once again, regardless of the source of the data, (dimensional, equipment performance, cycle times, defects, product testing), the company needed a way to reach it.

◈ Mistake-proofing of data – The vision was using current technology to eliminate operator data input errors. The company wanted the ability to use barcode scanning, pre-filled data fields, drop-down lists, etc.

◈ Real-time information about all processes – With a taste of how real-time data could help certain operations, the team figured it should span all processes.

◈ Ease of use by operators, supervisors, engineers and Black Belts – The idea was to get rid of a system that was cumbersome and difficult to use, thus making life better for everyone on the staff. Ease of use included having a system compatible with the statistical analysis software used by the company.

◈ Limited resources required for initial set-up and ongoing system maintenance – Finally, the team knew the company needed a system that required minimal on-going IT support and resources. The company was stretched too thin to place more demands on the IT staff.

The team then began considering options to meet company needs. Here are a few of the options considered:

◈ Hiring a programmer to create an application that could share information between current scheduling, SPC, engineering drawing and specification databases.
◈ Dedicating a portion of a Black Belt resource to data integrity.
◈ Investing in an enterprise wide knowledge-management system.
◈ Investigating the capabilities of different SPC software packages.

The Analysis of Options


Hiring a programmer to create a custom application held some appeal because the team thought the company’s needs were pretty specialized, and because it wanted control of the solution. Initially it seemed like it might be more cost-effective, but when the company’s core competencies and head count were fully considered, it was clear that writing custom software was not a business the company should be in. Also, the team was aware of the typically high failure rates for IT projects. The clincher, however, was the problem of maintaining a home-grown system. Team members had all seen clever home-grown software solutions implemented, only to see those systems hamper the organization as they became outdated and unsupportable.

Dedicating a portion of a Black Belt’s time to data integrity seemed crazy to the team. While having clean, reliable data was essential to driving Six Sigma projects, the act of getting that data added absolutely no value to the business. So dedicating highly valued resources to a non-value-added activity was counter-intuitive. The team considered investing in training and developing people, other than Black Belts, to harvest the needed data, but the investment costs in lead time and training resources were considerable. And the bottom line was they still needed some kind of software and hardware to do the job.

As the team investigated the field of enterprise-wide knowledge management systems, it found some great summary reporting tools, but they all lacked several key capabilities that were essential to Micropump’s business. First, a large portion of company efforts were focused on manufacturing, while knowledge management systems were not. Coupled with that was a serious weakness in real-time statistical analysis capabilities. While most of these tools could tell when something missed a target, they could not identify a statistical shift in mean or a statistical trend in real-time. Nor did they readily interface to the company’s statistical analysis software. Finally, they did not help scrub the data. In that way, they really did not move the company beyond where it already was – spending countless hours massaging and scrubbing data for projects.

The last choice, commercially available statistical process control solutions, proved to be the best course of action for the company. The team learned that while these systems have a reputation for belonging on the shop floor, the good ones do that and more. In short, the company was able to find a system that met all of its criteria. That system has been deployed now for nearly four years, and it continues to provide clean, reliable data in real time so that continuous improvement can be deployed across Micropump. The company’s precious Belt resources now spend their time doing the work of continuous improvement instead of cleaning data or being a shadow IT department.

The Proof Is in the Results


The entire DMAIC framework depends on the availability of reliable, quality data. In many companies, no provisions are made to ensure that key process information will be available when needed. The scenario experienced at Micropump early in its deployment is all too common – Black Belts designing redundant databases to capture process information for their projects, which stop when the project is closed. But now at Micropump, availability of data is a real priority. Micropump has been recognized as a leader within IDEX for its access to data for ongoing process control, for process analysis and to maintain the gains of its improvement efforts. Now other business units within IDEX are following Micropump’s lead and quickly gaining momentum.

During the last five years, the Operational Excellence program has yielded outstanding results at Micropump. The company integrated Lean tools with Six Sigma in 2003 resulting in a 30 percent improvement in on-time delivery while cutting inventory by more than half. The company is now extending the reach of Operational Excellence into the supply chain and to customers. Looking at the entire value stream has yielded even greater opportunity for improvement.

A significant impact on the company’s Six Sigma project cycle time has been noted as well. In the Define phase, projects are being scoped, prioritized and chartered faster than ever. Time required for the Measure phase has been reduced by an average of 10 percent, and implementing the Control phase is almost painless. The company is completing more projects and yielding benefits faster. Associates working on Operational Excellence project teams are seeing the results as well, and the momentum of the program has not slowed – it actually has accelerated.

Can the focus on building reliable data systems be given all the credit for Micropump’s success? Probably not. But the management team recognized the need to build a data infrastructure, and certainly deserves credit for enabling Six Sigma and Lean management.

Friday, 26 April 2019

Understanding Statistical Distributions for Six Sigma

Many consultants remember the hypothesis testing roadmap, which was a great template for deciding what type of test to perform. However, think about the type of data one gets. What if there is only summarized data? How can that data be used to make conclusions? Having the raw data is the best case scenario, but if it is not available, there are still tests that can be performed.

In order to not only look at data, but also interpret it, consultants need to understand distributions. This article discusses how to:

◈ Understand different types of statistical distributions.
◈ Understand the uses of different distributions.
◈ Make assumptions given a known distribution.

Six Sigma Green Belts receive training focused on shape, center and spread. The concept of shape, however, is limited to just the normal distribution for continuous data. This article will expand upon the notion of shape, described by the distribution (for both the population and sample).

Getting Back to the Basics


With probability, statements are made about the chances that certain outcomes will occur, based on an assumed model. With statistics, observed data is used to determine a model that describes this data. This model relates to the distribution of the data. Statistics moves from the sample to the population while probability moves from the population to the sample.

Inferential statistics is the science of describing population parameters based on sample data. Inferential statistics can be used to:

◈ Establish a process capability (determine defects per million).
◈ Utilize distributions to estimate the probability of a variable occurring given known parameters.

Inferential statistics are based on a normal distribution.


Figure 1: Normal Curve and Probability Areas

Normal curve distribution can be expanded on to learn about other distributions. The appropriate distribution can be assigned based on an understanding of the process being studied in conjunction with the type of data being collected and the dispersion or shape of the distribution. It can assist with determining the best analysis to perform. 

Types of Distributions


Distributions are classified in the same ways as data is classified – continuous and discrete: 

◈ Continuous probability distributions are probabilities associated with random variables that are able to assume any of an infinite number of values along an interval.
◈ Discrete probability distributions are listings of all possible outcomes of an experiment, along with their respective probabilities of occurrence. 

Distribution Descriptions


Probability mass function (pmf) – For discrete variables, the pmf is the probability that a variate takes the value x.

Probability density function (pdf) – For continuous variables, the pdf gives the relative likelihood that a variate falls near the value x; probabilities are obtained by integrating the pdf between two points.

In the continuous sense, one cannot give the probability of a specific x on a continuum – only of some specific (and small) range. For additional insight, think of the interval from x to x + Δx, where Δx is small.

The notation for the pdf is f(x). For discrete distributions: 

f(x) = P(X = x)

Some refer to this as the probability mass function, since it is evaluating the probability upon that one discrete mass. For continuous distributions, one mass cannot be established. 

Cumulative density function (cdf) – The probability that a variable takes a value less than or equal to x.


Figure 2: Normal Distribution Cdf

The cdf progresses to a value of 1 because there cannot be a probability greater than 1. Once again, the cdf is F(x) = P(X ≤ x). This holds for both continuous and discrete distributions.
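As a small illustration (Python with SciPy; the distributions and parameter values are arbitrary examples, not tied to any data set in this article), the pmf, pdf and cdf can be evaluated as follows.

```python
# pmf, pdf and cdf side by side; parameter values are arbitrary examples.
from scipy.stats import binom, norm

# Discrete case: the pmf gives P(X = x)
print(binom.pmf(k=3, n=10, p=0.2))        # chance of exactly 3 successes in 10 trials

# Continuous case: the pdf is a density, so probabilities come from intervals
print(norm.pdf(0.5))                      # density at x = 0.5 (not a probability)
print(norm.cdf(0.5) - norm.cdf(-0.5))     # P(-0.5 <= X <= 0.5)

# Cumulative distribution function, F(x) = P(X <= x), for both cases
print(binom.cdf(k=3, n=10, p=0.2))
print(norm.cdf(0.5))
```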

Parameters


Parameter is a population description. Consultants rely on parameters to characterize the distributions. There are three parameters: 

◈ Location parameter – the lower or midpoint (as prescribed by the distribution) of the range of the variate (think of the mean)
◈ Scale parameter – determines the scale of measurement for x (magnitude of the x-axis scale) (think of the standard deviation)
◈ Shape parameter – defines the pdf shape within a family of shapes

Not all distributions have all three parameters. For example, the normal distribution has just two – the mean and the standard deviation. Just those two need to be known to describe a normal population.
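A brief sketch of how these three parameters show up in practice (using SciPy’s conventions; the parameter values are arbitrary examples): the normal takes only location and scale, while distributions such as the lognormal and Weibull add a shape parameter.

```python
# Location, scale and shape parameters in SciPy; values are arbitrary examples.
from scipy.stats import norm, lognorm, weibull_min

normal_dist = norm(loc=10, scale=2)               # location = mean, scale = std dev
lognormal_dist = lognorm(s=0.8, scale=5)          # s is the shape parameter
weibull_dist = weibull_min(c=1.5, scale=100)      # c is the shape parameter

for dist in (normal_dist, lognormal_dist, weibull_dist):
    print(round(dist.mean(), 2), round(dist.std(), 2))
```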

Summary of Distributions


The remaining portion of this article will summarize the various shapes, basic assumptions and uses of distributions. Keep in mind that there is a different pdf and different distribution parameters associated with each. 

Normal Distribution (Gaussian Distribution)



Figure 3: Normal Distribution Shape

Basic assumptions: 

◈ Symmetrical distribution about the mean (bell-shaped curve)
◈ Commonly used in inferential statistics
◈ Family of distributions characterized by μ and σ

Uses include: 

◈ Modeling measurement data that cluster symmetrically around a central value
◈ Basis for much of inferential statistics, including confidence intervals, hypothesis tests, control charts and process capability studies
◈ Approximating other distributions (such as the binomial) when sample sizes are large

Exponential Distribution



Figure 4:Exponential Distribution Shape

Basic assumptions: 

◈ Family of distributions characterized by its mean μ
◈ Distribution of time between independent events occurring at a constant rate
◈ Mean is the inverse of the Poisson distribution
◈ Shape can be used to describe failure rates that are constant as a function of usage

Uses include probabilistic assessments of: 

◈ Mean time between failure (MTBF)
◈ Arrival times
◈ Time, distance or space between occurrences of the events of interest
◈ Queuing or wait-line theories
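As a small worked example of the MTBF use case listed above (the MTBF and mission time are illustrative values, not from any data set here), the chance that a unit survives a mission time follows directly from the exponential cdf.

```python
# Illustrative exponential reliability calculation; the MTBF and mission time
# are arbitrary example values.
from scipy.stats import expon

mtbf_hours = 500.0
mission_hours = 100.0

failure_model = expon(scale=mtbf_hours)       # scale = mean time between failures
p_fail = failure_model.cdf(mission_hours)     # probability of failing within the mission
p_survive = 1 - p_fail                        # exp(-100/500), about 0.82

print(round(p_fail, 3), round(p_survive, 3))
```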

Lognormal Distribution



Figure 5: Lognormal Distribution Shape

Basic assumptions:

Asymmetrical and positively skewed distribution that is constrained by zero.

◈ Distribution can exhibit many pdf shapes
◈ Describes data that has a large range of values
◈ Can be characterized by μ and σ

Uses include simulations of: 

◈ Distribution of wealth
◈ Machine downtimes
◈ Duration of time
◈ Phenomenon that has a positive skew (tails to the right)

Weibull Distribution



Figure 6: Weibull Distribution Pdf

Basic assumptions: 

◈ Family of distributions
◈ Can be used to describe many types of data
◈ Fits many common distributions (normal, exponential and lognormal)
◈ The differing factors are the scale and shape parameters

Uses include: 

◈ Lifetime distributions
◈ Reliability applications
◈ Failure probabilities that vary over time
◈ Can describe burn-in, random, and wear-out phases of a life cycle (bathtub curve)

Binomial Distribution



Figure 7: Binomial Distribution Shape

Basic assumptions: 

◈ Discrete distribution
◈ Number of trials is fixed in advance
◈ Just two outcomes for each trial
◈ Trials are independent
◈ All trials have the same probability of occurrence

Uses include: 

◈ Estimating the probabilities of an outcome in any set of success or failure trials
◈ Sampling for attributes (acceptance sampling)
◈ Number of defective items in a batch size of n
◈ Number of items in a batch
◈ Number of items demanded from an inventory
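A small acceptance-sampling sketch of the uses listed above (the lot quality and sample size are arbitrary examples): the binomial model gives the chance of accepting a lot when at most two defectives are allowed in the sample.

```python
# Illustrative binomial acceptance-sampling calculation; numbers are examples.
from scipy.stats import binom

n, p = 50, 0.03                    # sample size, fraction defective in the lot
p_accept = binom.cdf(2, n, p)      # P(0, 1 or 2 defectives in the sample)

print(round(p_accept, 3), n * p)   # acceptance probability, expected defectives
```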

Geometric



Figure 8: Geometric Distribution Pdf

Basic assumptions: 

◈ Discrete distribution
◈ Just two outcomes for each trial
◈ Trials are independent
◈ All trials have the same probability of occurrence
◈ Waiting time until the first occurrence

Uses include: 

◈ Number of failures before the first success in a sequence of trials with probability of success p for each trial
◈ Number of items inspected before finding the first defective item – for example, the number of interviews performed before finding the first acceptable candidate 

Negative Binomial



Figure 9: Negative Binomial Distribution Pdf

Basic assumptions: 

◈ Discrete distribution
◈ Predetermined number of occurrences – s
◈ Just two outcomes for each trial
◈ Trials are independent
◈ All trials have the same probability of occurrence

Uses include: 

◈ Number of failures before the sth success in a sequence of trials with probability of success p for each trial
◈ Number of good items inspected before finding the sth defective item

Poisson Distribution



Figure 10: Poisson Distribution Pdf

Basic assumptions: 

◈ Discrete distribution
◈ Length of the observation period (or area) is fixed in advance
◈ Events occur at a constant average rate
◈ Occurrences are independent
◈ Rare event

Uses include: 

◈ Number of events in an interval of time (or area) when the events are occurring at a constant rate
◈ Number of items in a batch of random size
◈ Design reliability tests where the failure rate is considered to be constant as a function of usage
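As a quick illustration of the uses listed above (the rate is an arbitrary example), the Poisson model answers questions such as the chance of a defect-free interval when events arrive at a constant average rate.

```python
# Illustrative Poisson calculations; the rate is an arbitrary example value.
from scipy.stats import poisson

rate = 1.2                                 # average events per interval
print(round(poisson.pmf(0, rate), 3))      # probability of zero events, exp(-1.2) ~ 0.30
print(round(1 - poisson.cdf(2, rate), 3))  # probability of more than 2 events
```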

Hypergeometric


Shape is similar to Binomial/Poisson distribution.

Basic assumptions:

◈ Discrete distribution
◈ Number of trials is fixed in advance
◈ Just two outcomes for each trial
◈ Trials are not independent (each draw changes the remaining population)
◈ Sampling without replacement
◈ This is an exact distribution – the Binomial and Poisson are approximations to this
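The sketch below compares the exact hypergeometric answer with its binomial approximation for an arbitrary example: drawing 10 items without replacement from a lot of 100 that contains 5 defectives.

```python
# Exact hypergeometric probability versus its binomial approximation; the lot
# and sample sizes are arbitrary examples. Note SciPy's argument order for
# hypergeom.pmf is (k, population size, successes in population, sample size).
from scipy.stats import hypergeom, binom

lot_size, defectives_in_lot, sample_size = 100, 5, 10
k = 1                                              # exactly one defective in the sample

exact = hypergeom.pmf(k, lot_size, defectives_in_lot, sample_size)
approx = binom.pmf(k, sample_size, defectives_in_lot / lot_size)

print(round(exact, 4), round(approx, 4))
```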

Other Distributions


There are other distributions – for example, sampling distributions and the χ² (chi-square), t and F distributions.

Monday, 15 April 2019

Non-normal Data Needs Alternate Control Chart Approach

Some practitioners mistakenly believe that it is not necessary to transform data before creating an individuals control chart when the underlying process distribution response is not normal. An individuals control chart, however, is not robust to non-normally distributed data. Therefore, it is important to use an alternate control charting approach.

Necessary Transformation


Consider a hypothetical application of the individuals control chart involving an accounts receivable department sending invoices to customers for payment. The difference between payment date and due date often follows a lognormal distribution.

The following data can be considered a random selection of one invoice daily for 1,000 days, where the due date for the invoice was subtracted from its payment date. Therefore, for instance, a positive value of 10 indicates that an invoice payment was 10 days late.

In this example, 1,000 points were randomly generated from a lognormal distribution with a location parameter of 2, a scale parameter of 1 and a threshold of 0 (i.e., lognormal 2, 1, 0). The distribution from which these samples were drawn is shown in Figure 1. In this simplified illustration, it is considered that nobody paid early, where the threshold would be equal to zero. A normal probability plot of the 1,000 sample data points is shown in Figure 2.
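A sketch of how a comparable sample could be generated and checked is shown below; the seed is arbitrary, and the Shapiro-Wilk test is only a stand-in for the probability-plot tests shown in the figures.

```python
# Reproduce the idea of the example: 1,000 draws from a lognormal distribution
# with a log-scale mean of 2, log-scale standard deviation of 1 and threshold 0,
# then test the sample for normality. The seed is arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
days_late = rng.lognormal(mean=2.0, sigma=1.0, size=1000)

stat, p_value = stats.shapiro(days_late)   # small p-value -> reject normality
print(p_value)                             # expected to be far below 0.05
```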


Figure 1: Distribution from Which Samples Were Selected


Figure 2: Normal Probability Plot of the Data

From Figure 2, it is possible to reject the null hypothesis of normality – statistically, because of the low p-value, and visually, because the normal probability plotted data does not follow a straight line. This is also logically consistent with the problem setting, where a normal distribution for the output of such a process is not necessarily expected. A lognormal probability plot of the data is shown in Figure 3.


Figure 3: Lognormal Probability Plot of the Data

From Figure 3, a practitioner would not reject the null hypothesis of the data being from a lognormal distribution because the p-value is not below the criteria of 0.05 and the lognormal probability plotted data tends to follow a straight line. Hence, it is reasonable to model the distribution of this variable as lognormal.

If the individuals control chart is robust to the non-normality of data, an individuals control chart of the randomly generated data should be in statistical control. In the most basic sense, using the simplest run rule (a point is “out of control” when it is beyond the control limits) such data would be expected to give a false alarm three or four times out of 1,000 points, on average. Further, a practitioner could expect false alarms below the lower control limit to be equally likely to occur as false alarms above the upper control limit.

Figure 4 shows an individuals control chart of the randomly generated data.


Figure 4: Individuals Control Chart of the Random Sample Data

The individuals control chart shows many out-of-control points beyond the upper control limit. In addition, the individuals control chart shows a physical lower boundary of 0 for the data, which is well within the lower control limit of -22.9. If no transformation is needed when plotting non-normal data in a control chart, then a practitioner would expect to see a random scatter pattern within the control limits, which is not the case in Figure 4.

Figure 5 shows a control chart using a Box-Cox transformation with a lambda value of 0, the appropriate transformation for lognormally distributed data.


Figure 5: Individuals Control Chart with a Box-Cox Transformation, Lambda Value of 0

This control chart is much better behaved than the control chart in Figure 4. Almost all 1,000 points are in statistical control. The number of false alarms is consistent with the design and definition of the individuals control chart control limits.
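Because a Box-Cox transformation with a lambda of 0 is simply a natural-log transform, the behavior of Figures 4 and 5 can be approximated with a few lines of code. The sketch below uses the standard 2.66-times-average-moving-range limits and a simulated sample that mirrors the one described earlier; it is an illustration, not the software used for the figures.

```python
# Compare raw versus log-transformed individuals-chart behavior for simulated
# lognormal data; limits use the standard mean +/- 2.66 * average moving range.
import numpy as np

rng = np.random.default_rng(seed=1)
days_late = rng.lognormal(mean=2.0, sigma=1.0, size=1000)   # lognormal(2, 1, 0)

def individuals_limits(x):
    """Return (LCL, UCL) for an individuals chart."""
    mr_bar = np.abs(np.diff(x)).mean()
    return x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar

# Untransformed data: many points exceed the upper limit, as in Figure 4
lcl, ucl = individuals_limits(days_late)
print(int(np.sum((days_late < lcl) | (days_late > ucl))))

# Box-Cox with lambda = 0 (natural log): false alarms drop to the expected few
log_days = np.log(days_late)
lcl, ucl = individuals_limits(log_days)
print(int(np.sum((log_days < lcl) | (log_days > ucl))))
```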

Finding the Process Capability Metric


By using a lognormal probability plot, it is possible to determine the best estimate process capability metric output for this fictitious process: 80 percent of all invoices are paid between 2.1 and 27.4 days beyond the due date, with a median of 7.7 days late, when no specification exists (Figure 6).
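The same summary can be read off a fitted lognormal model. The sketch below uses the textbook parameters of the simulated example (log-mean 2, log-standard deviation 1), so its percentiles land near, but not exactly on, the figures quoted above, which were estimated from the sample itself.

```python
# Median and 80 percent interval (10th to 90th percentiles) of a lognormal model
# with log-mean 2 and log-standard deviation 1, matching the simulated example.
import numpy as np
from scipy.stats import lognorm

dist = lognorm(s=1.0, scale=np.exp(2.0))

p10 = dist.ppf(0.10)      # about 2.1 days late
median = dist.ppf(0.50)   # about 7.4 days late
p90 = dist.ppf(0.90)      # about 26.6 days late

print(round(p10, 1), round(median, 1), round(p90, 1))
```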


Figure 6: Lognormal Plot of Data with 80 Percent Frequency of Occurrence Rate

If data is not from a normal distribution, an individuals control chart can generate many false signals, leading to unnecessary tampering with the process. When no specifications exist, a best estimate for the 80 percent frequency of occurrence rate, along with median response, is an easy-to-understand description that conveys what the process is expected to produce in terms that everyone can visualize. If a specification exists, then the percentage non-conformance can be determined from the probability plot and be presented as the process capability of the process.

Avoiding Type 1 Errors


The specific distribution used in the prior example, lognormal (2, 1, 0), has an average run length (ARL) of 28 points for type 1 errors (when the null hypothesis is rejected in error). The single sample used showed 33 out-of-control points, close to the estimated value of 28. Considering a less-skewed lognormal distribution, lognormal (4, 0.25, 0), the ARL for rule one false alarms rises to 101. Note that a normal distribution will have a type 1 error ARL of around 250.

The lognormal (4, 0.25, 0) distribution passes a normality test more than half the time with samples of 50 points. In one simulation, a majority (75 percent) of the type 1 errors occurred on the samples that tested as non-normal. This result reinforces the conclusion that normality or a near-normal distribution is required for a reasonable use of an individuals chart or a significantly higher type 1 error rate will occur.