Thursday, 31 January 2019

Prince2 vs PMP

What is PMP?


PMP stands for Project Management Professional, and PMP is a professional certification from PMI (Project Management Institute). PMI is a US-based not-for-profit organization focused on project management, which has developed and published a number of standards, including:


◈ A Guide to the Project Management Body of Knowledge (PMBOK® Guide)
◈ The Standard for Program Management
◈ The Standard for Portfolio Management
◈ Organizational Project Management Maturity Model (OPM3®)

And also a number of practice standards and frameworks:

◈ Practice Standard for Project Risk Management
◈ Practice Standard for Earned Value Management
◈ Practice Standard for Project Configuration Management
◈ Practice Standard for Work Breakdown Structures
◈ Practice Standard for Scheduling
◈ Project Manager Competency Development Framework

The PMBOK Guide is the most famous of these standards, and its worldwide use has led to the development of three extensions, which allow for more effective use in certain areas of application:

◈ Software Extension to the PMBOK Guide
◈ Construction Extension to the PMBOK Guide
◈ Government Extension to the PMBOK Guide

PMI has recently added a line of practice guides, which includes:

◈ Managing Changes in Organizations: A Practice Guide

PMI Certifications


PMI has a number of popular certifications which are globally recognised. Getting one of these certifications is an easy way to prove your knowledge later on, but it is also beneficial because preparing for the exams gives you the chance to learn more and fill in the gaps in your knowledge.

These are the PMI’s certifications:

◈ Project Management Professional (PMP)® – Although based on the PMBOK Guide, the exam questions are about the project management body of knowledge in general, and the PMBOK Guide is just a part of it; well, an important part.
◈ Certified Associate in Project Management (CAPM)® – a simpler form of the PMP
◈ Program Management Professional (PgMP)®
◈ Portfolio Management Professional (PfMP)SM
◈ PMI Agile Certified Practitioner (PMI-ACP)® – There is no standalone publication for this yet, but there is a syllabus and they award certifications
◈ PMI Risk Management Professional (PMI-RMP)®
◈ PMI Scheduling Professional (PMI-SP)® – A certification useful for project planners
◈ OPM3® Professional Certification

PRINCE2


PRINCE2 is a project management methodology previously owned by the UK’s Cabinet Office. PRINCE2, along with other “Best Practice” publications, is now owned by AXELOS, a joint venture company in the UK.


This “Best Practice” family consists of the following:

◈ PRINCE2 – Project management
◈ MSP – Program management
◈ MoP – Portfolio management
◈ M_o_R – Risk management
◈ MoV – Value management
◈ P3O – Project, program, and portfolio management offices (PMO)
◈ P3M3 – Project, program, and portfolio management maturity model
◈ ITIL – IT service management

PRINCE2 Certifications


There are three levels of certification for PRINCE2:

◈ PRINCE2 Foundation – the simple one
◈ PRINCE2 Practitioner – the important one
◈ PRINCE2 Professional – just too hard! (don't bother with this)

When people are talking about PRINCE2 certifications, they usually refer to the PRINCE2 Foundation and PRINCE2 Practitioner. The former is easier than PMP and it is unfair to compare them, while the latter is as hard as PMP or, as some people believe, even harder than PMP.

The main source for both the Foundation and Practitioner exams is an official publication named “Managing Successful Projects with PRINCE2”. However, the official guide is best avoided if you are only preparing for the PRINCE2 Foundation exam, as it makes preparation too difficult.

Tuesday, 29 January 2019

How to survive a zombie apocalypse with PRINCE2®

It’s the spookiest time of year and, naturally, as an accredited training organisation, we thought it was only appropriate that we had a little think about zombies. Hey now, bear with us here – there’s method to our madness or, at least, a methodology…


A zombie apocalypse epitomises every project manager’s nightmare – no clear plan (other than, y’know… not dying), no clear budget, scope or even visibility on how many team members will turn up each day, and don’t even get us started on the ever-increasing log of risks…

So, how about if we employed the power of PRINCE2® – or more specifically, its seven principles – to help you survive the zombie apocalypse? Can they help you out of your impending sticky situation? Let’s see shall we…

1. Continued business justification


The first principle states that a good project must make good business sense – there should be a return on investment (ROI), and both your use of time and resources should be justified. We feel like this one is a bit of a no-brainer. Your continued business justification and ROI are to stay alive; whatever you are doing is justified and worthwhile if you all come out the other side, potentially minus a few limbs. As obvious as it seems, holding fast to this pearl of wisdom will keep you focussed on the task at hand, and may just save your life.

2. Learn from experience


Now, possibly another obvious one, but if someone died on day one because they forgot to check right, left and right again before crossing the zombie street, it’s probably a good idea to learn from this and avoid it in the future. Equally, everyone knows that you kill a zombie by destroying its brains, so learn from this – don’t go for the knees, that’s how you lose your head…

3. Define roles and responsibilities


Don’t underestimate the power of knowing what you’re doing before you do it. We need a rotating lookout each night, a route-planner, someone collecting weapons and someone to keep an eye on the younger, less-experienced members of the team, at least. And for the love of all things good, we can’t be having Tony and Susan fighting for team leadership in the middle of an all-out zombie brawl.

4. Manage by stages


Difficult tasks are better off broken into manageable chunks. If you’re facing a large-scale escape mission, it will probably help to take a back step and think about things stage by stage. Don’t all run out at once – that’s how Steve died, remember? Break it down – distract the zombies first, then get to shelter on the other side of the street, then get past the zombie hide-out and, finally, make the last leg to the safehouse. Now, have a nice cup of tea and wait for it all to blow over. Job done.

5. Manage by exception


Do not – we repeat, do not – inform the authorities of anything unless there is a serious problem. If the project is going well (or, in this case, not badly) then there shouldn’t be much need for intervention from higher managers. If you give away your position and let the army (or project-management equivalent) come parading in then someone is bound to end up dead. Work the problem out amongst yourselves, have some faith and use your brains… before the zombies do…

6. Focus on products


Everyone should know ahead of time what is expected of the product. In this case, everyone should know that the ‘product’ is your life, essentially, or collective lives. The ideal scenario is that everyone survives, limbs intact, and every team member should be aware of this. These requirements will determine your work activity, not the other way around – like we said, don’t go for the knees… loss of head… not ideal. See what we’re saying?

7. Tailor to environment


Lastly, PRINCE2 is a tailorable methodology. No matter what type of zombie apocalypse you may be facing – be it Dawn of the Dead, 28 Days Later or I Am Legend – you’re good to go. Projects that adapt this methodology to their needs are more likely to succeed than those that don’t. To put this into perspective, zombies have taken on many interpretations over the years, and if you’re trying to use PRINCE2 to tackle land-based zombies when your environment actually calls for defence against water-based zombies, then the outlook is bleak, to say the least.

Thursday, 24 January 2019

Understanding Emotional Intelligence in Leadership

The importance of emotional intelligence (EI) in business, particularly in leadership, is well understood. Leaders with high emotional intelligence can get the most out of their employees. In so doing, they propel projects and businesses to ever greater heights.


Leaders without emotional intelligence, on the other hand, often struggle to communicate their intentions and goals to team members. Motivating staff and inspiring confidence can also be a challenge for those lacking a good level of EI.

What is emotional intelligence?


By its very nature, emotional intelligence is hard to define. However, we can loosely describe it as a group of non-cognitive skills and capabilities that help people understand themselves and others.

These skills and capabilities are fairly instinctive. They can make the difference between a good leader and a great one. Unlike technical management skills, emotional intelligence can’t really be taught. Although it’s possible to help people gain a better understanding of their own EI, it takes consistent effort and openness to improve emotional intelligence.

What does emotional intelligence in leadership look like?


Leaders with high emotional intelligence generally find it easier to motivate and communicate with team members. These types of leaders are generally more honest and more inspiring than those who lack emotional intelligence. This often helps them to appear more confident and more capable.

A higher level of self-awareness, empathy and social skills helps leaders connect with team members, keep lines of communication open, and really understand what employees mean to say. This helps leaders with high EI manage both small and large teams.

Working for leaders with high emotional intelligence


There are many benefits to working for a leader with high emotional intelligence. For a start, employees are more likely to feel valued and like their views are being listened to. These employees also enjoy inclusive working environments and dynamic company cultures.

Improving your emotional intelligence


It’s true that emotional intelligence is hard to teach. That said, there are a lot of techniques to help improve EI and make leaders more aware of the impact of their emotions. By raising their level of emotional intelligence, leaders can improve self-awareness, communication and empathy. In turn, they’ll get more from their team members.

Leaders who understand the importance of emotional intelligence, and work to improve it where possible, can become more effective and successful.

Tuesday, 22 January 2019

5 Service-based Business Process Mapping Tips


Business process mapping is a great method for understanding the complex processes that impact a business’s bottom line. Comparing a map of how a process is supposed to work and how a process actually works can be revealing. These methods, originally developed to understand manufacturing processes, do not always translate well to service or transactional types of businesses. Here are a few tips to help you better use business process mapping in your service-based business.

1. Accept That Your Business Is Not a Conveyor Belt


Most formal training for business process mapping uses some type of conveyor–belt-style manufacturing process. These processes are typically straightforward with raw materials entering and finished products exiting. Services are rarely this simple. Service processes will have a lot of back and forth between customers and employees. This human interaction will result in special cause variation that can be beneficial to understand. Embrace this and do not try to make your process map look like a conveyor belt.

2. Concentrate on Decision Points


Service processes will typically have many more decision points than you’d see in a manufacturing process. These decision points are a great source of information for your process. These decisions hold the key to understanding inefficiency and ineffectiveness in your processes.

3. Understand the Big Picture AND Get in the Weeds


Given the special cause nature of service processes and the volume of decision points, your process map may look more like a plate of spaghetti than a process map. There is an art to deciding how detailed to get with a process map. Understanding the decisions is imperative, but too much complexity can hide problems from view.

4. Get Process Context


Do not map processes in a vacuum. While this is true for both manufacturing and service processes, it is especially true for service processes. The people working the actual process are the only ones that truly understand the complexity of the process. They are the ones who work through the myriad decisions on a daily basis. Having them in the room while you are mapping the process and observing the work will give valuable context to the business process map.

5. Stay Flexible


Use the business process map to help write standard operating procedures. These procedure maps should include the standard, but should also include exceptions to the standard. They will need to describe what should be done during a typical execution, during frequently occurring exceptions and what to do when an unexpected variation occurs.

Where to Start


There are two ways you can start process mapping: from the top down or from the bottom up. Bottom-up process mapping starts with the greatest detail; each iteration becomes more generalized until you can see the big picture. I prefer to start from the top, or generalized, process map and get deeper into the weeds with each iteration. I use a SIPOC (supplier, input, process, output, customer) to understand the process first from a high level. I then map processes between the SIPOC process steps. I gradually add detail to these process maps until I encounter data entropy.
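To make that top-down approach concrete, here is a minimal Python sketch, purely our own illustration and not part of the original article: a SIPOC captured as plain data, with each high-level process step expanded into its own detail map on later iterations. The process names and sub-steps are invented for demonstration.

    # Illustrative top-down starting point: a SIPOC captured as plain data,
    # with each high-level process step later expanded into its own detailed map.
    sipoc = {
        "suppliers": ["Customer", "Sales team"],
        "inputs":    ["Service request", "Account details"],
        "process":   ["Receive request", "Validate account", "Schedule work",
                      "Deliver service", "Invoice customer"],
        "outputs":   ["Completed service", "Invoice"],
        "customers": ["Customer", "Finance"],
    }

    # Next iteration: map the sub-steps that sit between the SIPOC process steps.
    detail_maps = {step: [] for step in sipoc["process"]}
    detail_maps["Validate account"] = ["Look up account", "Check credit hold",
                                       "Decision: on hold? -> escalate"]

    for step, substeps in detail_maps.items():
        print(step, "->", substeps or "(not yet mapped)")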


Business process mapping is a valuable tool for improving efficiency and effectiveness. Implementing these five tips will help you map processes in a service or transactional business.

How to Manage Scope Creep

How many times have we initiated a project only to find that as we move forward through project planning and into project execution, more and more changes to the project are required? While change is inevitable throughout almost any project’s lifecycle, if it is not properly managed and documented, it will result in scope creep. Scope creep is the uncontrolled addition of work, requirements, or deliverables to a project which fall outside of the project’s defined scope. A project’s scope defines exactly what a project is intended to accomplish to include all of the requirements and deliverables necessary to achieve the desired result. The project scope is the backbone of the project upon which all of the project’s planning activities are based. While changes may certainly be made to the project, it’s the unmanaged and uncontrolled changes which result in scope creep that must be avoided.


So how can scope creep be avoided? What project management tools exist to help us in the project planning phase so we can prevent scope creep from occurring? The key to understanding and preventing scope creep is to realize that the more thoroughly a project is planned, the more prepared the project manager and team will be to avoid scope creep. Fortunately there are several project management tools which help plan, baseline, and manage the project scope as well as manage any requested changes to the scope. These tools are used during the project planning phase and are all components of the formal project management plan.

Scope Baseline: The scope baseline consists of the approved project scope statement, work breakdown structure (WBS), and WBS dictionary. These components allow the project team to establish a baseline against which actual project scope will be measured. It is important that the scope baseline is created through careful and deliberate planning to ensure all project work is captured which supports the agreed upon project scope. The work descriptions must be as detailed as possible because if they’re not, then they’re open to interpretation which invites scope creep. As the project progresses, the team will be able to measure if the project’s work and requirements have been met in order to determine project success.

Scope Management Plan: This plan, also a component of the formal project plan, defines the formal process for how the project scope will be managed and controlled during the project’s lifecycle. It defines authority and responsibility for managing and controlling scope, as well as how scope will be controlled, measured, and verified. By carefully developing this plan and following it closely, it will help the project team avoid the introduction of any scope creep.

Change Management Plan: This plan is an extremely important tool in preventing scope creep on a project. Most projects have many stakeholders who bring many different interests into play. Some may want added functionality beyond what is included in the approved scope baseline. Others may see an opportunity to add work to a project to benefit their organization. By developing and adhering to a formal change management plan, the project team is empowered to consider any proposed change to the project while maintaining the integrity of the scope. If a change is determined to be necessary, it can be carefully managed and communicated and add to the project’s value instead of simply resulting in scope creep.

Configuration Management Plan: This plan defines how changes in project documentation and tools will be managed throughout the project. Configuration management is necessary in order to ensure that all project documentation and tools are managed based on the original project scope and any approved changes to the scope. This proactive approach to managing project documentation ensures that there is consistency between the scope baseline and any changes in project scope while preventing incidents of scope creep finding their way into the documentation.

Requirements Management Plan: This plan defines how project requirements will be identified, analyzed, documented, and managed. It is important during project planning to ensure all requirements are captured. These requirements coincide with the scope baseline and need to be met to ensure a successful project. Failure to capture all requirements or leave requirements poorly defined can result in scope creep as the project moves forward.

We know that through careful project planning we can utilize several components of the formal project plan as tools for preventing scope creep. So once the project moves into the execution phase how can we monitor the scope to ensure we continue working within the approved baseline? Fortunately there are also several project management tools available which aid in this purpose. The first tools are outputs of the Requirements Management Plan. These are the requirements documentation and the requirements traceability matrix.

Requirements Documentation and Traceability Matrix: All project requirements must be documented in order to ensure they can be understood, communicated, and completed in order to complete the project. However, just documenting the requirements is not sufficient for monitoring the project’s scope. There must be a clear understanding of each requirement and accountability or ownership of each requirement. The project manager uses the requirements documentation and traceability matrix to establish understanding and ownership of each requirement and track its status. Work requested or performed outside of the documented requirements and traceability matrix may be an indicator that there is some level of scope creep occurring. These tools provide an organized method for monitoring scope and ensuring all project work supports an approved and documented requirement.

Variance Analysis: Variance analysis is the process of measuring scope performance against the scope baseline. As part of the scope management plan every project should have an acceptable variance within which changes to the scope are not required and outside of which corrective action may be needed. If corrective actions are needed then this may also lead to updating the scope baseline, project plan, or other project documentation which should be done through the change control process. Variance analysis is an effective tool which can be used iteratively throughout the project lifecycle to monitor scope.

We know that project managers and teams work hard to plan for project requirements, milestones, and deliverables while considering all feedback from stakeholders. However, project changes are often needed to include an unplanned event or deliverable or to bring a project back in line to support its original intent. While the project manager wants to avoid unnecessary changes which are often the result of scope creep, necessary changes which benefit the project should be managed through the project’s change management process and incorporated into the project.

When a stakeholder or project team member identifies the need for a change, the project manager must have a plan in place to manage the change through a review and approval process and to effectively communicate the change if it is accepted. This process should be detailed in the project’s Change Management Plan. Once the change is proposed, the project manager must ensure it is documented and an analysis is conducted to determine the change’s impact on the scope, time, and cost of the project. The proposed change then goes to the change control board, which determines whether the change is necessary and supports the project’s intent. Sometimes the change will require the project’s scope baseline, schedule, or budget to be modified. If the change is accepted by the change control board, it is imperative for the project manager to ensure all project documentation is updated as necessary and to communicate the change to the stakeholders and project team. This process of carefully reviewing, approving, documenting, and communicating a change is what differentiates managed project changes from unmanaged scope creep.

Managing project scope and preventing scope creep are ongoing tasks the project manager must perform throughout the entire project lifecycle. The scope is the very essence of the project and determines if the outcome is successful or not. Many stakeholders may have many varying interests in a project because of how their work, organizations, budgets, schedules, or resources are affected. Because of this, and the many internal and external influences on projects, great care and consideration must be taken when planning and managing scope. Project managers must understand that all of these competing interests must be considered in order to maintain support for the project. By using the project management tools at their disposal, communicating with stakeholders, and managing scope as opposed to only reacting to scope creep, the project manager can maintain support for his or her project while ensuring all project work directly correlates with the tasks required to successfully complete the project.

Saturday, 19 January 2019

Understanding Scatter Diagrams and Correlation Analysis

Six Sigma scatter diagrams and their correlation analyses often debunk management myths. Many times executives assume and/or presume that measures vary together when they do not. Sometimes they assume and/or presume that measures do not vary in concert with one another when they do. For better or worse, budget forecasts are based on these assumptions. Knowing which factors do and don’t vary together improves forecasting accuracy. Improved forecasts can reduce decision risk.

Being able to quantify the degree of co-variation, called correlation, helps leaders understand whether assumptions are on or off base. The word correlation does not imply or mean causation. A correlation simply means that two measures tend to vary together. A perfect positive, one-to-one (1:1) correlation has a correlation coefficient of +1. A perfect 1:1 negative correlation has a correlation coefficient of -1. Since everything varies, one rarely sees a perfect correlation. If you see a perfect correlation coefficient, doubt it.

The following table arrays an older Six Sigma executive’s age and the price of gasoline over the past 50 years. Because the paired recorded data is in sequential order, we can analyze the data. Notice each field is homogeneous; data fields are not mixed together as they would be in a traditional spreadsheet.

Table 1: Age and Gasoline Price Table

Year    My Age    Gasoline Price
1950       0          $0.06
1955       5          $0.12
1960      10          $0.27
1965      15          $0.15
1970      20          $0.52
1975      25          $0.64
1980      30          $0.76
1985      35          $0.89
1990      40          $1.10
1995      45          $1.19
2000      50          $1.40

With the data contained in the two columns labeled My Age and Gasoline Price, one can easily create a Scatter diagram using most of the statistical software programs available today. With a bit of advanced training you can add titles for eye appeal.


The linear relationship between the executive’s age on the X axis and the price of gasoline on the Y axis is almost perfect, 0.984. The correlation number, 0.984, is called an r value in Six Sigma jargon. Using the straight black line to relate age values on the X axis to price values on the Y axis, what was the price when this executive was 22? What was the price when he was 48? Looking into the future, a process called extrapolation, what would you predict the price of gasoline and the executive’s age will be in 2005?

Did an executive’s age cause the price of gasoline to increase? No. But, the two measures do tend to vary together. As one gets larger, so does the other. This is a linear relationship, meaning the black line in the middle of the chart describes the relationship. It is an easy chart to interpret. The red ‘curved lines’ framing the line are called confidence intervals.

As a rule of thumb, a strong correlation or relationship has an r-value between 0.85 and 1, or between -0.85 and -1. In a moderate correlation, the r-value ranges from 0.75 to 0.85 or from -0.75 to -0.85. In a weak correlation, one that is not a very helpful predictor, r ranges from 0.60 to 0.74 or from -0.60 to -0.74. Though an entirely random relationship equals 0.00, any relationship with a correlation r-value of 0.59 or below in absolute terms is not considered to be a reliable predictor.
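For readers who want to reproduce the r value, here is a minimal Python sketch (not from the original article) that computes the Pearson correlation and a least-squares line for the Table 1 data. The only assumption is the age of 5 in 1955, which follows from the rest of the table; the helper functions and the 2005 prediction at age 55 are our own.

    # Minimal sketch: Pearson r and a least-squares line for the Table 1 data.
    age = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
    price = [0.06, 0.12, 0.27, 0.15, 0.52, 0.64, 0.76, 0.89, 1.10, 1.19, 1.40]

    def mean(xs):
        return sum(xs) / len(xs)

    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    r = pearson_r(age, price)

    # Least-squares line price = b0 + b1 * age, used for inter- and extrapolation.
    mx, my = mean(age), mean(price)
    b1 = sum((x - mx) * (y - my) for x, y in zip(age, price)) / sum((x - mx) ** 2 for x in age)
    b0 = my - b1 * mx

    print(f"r = {r:.3f}")                                   # about 0.984, as quoted in the text
    print(f"price at age 22 ~ ${b0 + b1 * 22:.2f}")
    print(f"price at age 55 (year 2005) ~ ${b0 + b1 * 55:.2f}")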

The scatter diagram below illustrates a case in point. In this enterprise, finance managers assumed that there was a linear relationship, a correlation, between monthly operating expenses and the number of units their factory processed. The shotgun pattern illustrates that the simple linear relationship is so weak, that their predictions were invariably misleading.


The low r value of 0.159 suggests that there was virtually no relationship between these two factors. This insight helped the team focus on other key factors that did matter. The insight gained from Six Sigma statistics saved time and money.

The Product Manager Vs. Project Manager— What Are the Differences & Relations?

Despite the similar names, there are some big differences between a project manager and a product manager. The titles are often used interchangeably, but they are different disciplines requiring quite different skills. In a nutshell:


* The product manager’s job is to do the right thing: make sure the product is in line with market demand, giving the company’s profit the highest priority.

* The project manager’s job is to do things the right way: make sure the work is carried out properly, taking time, cost and resource constraints into consideration in order to achieve the ultimate goal.

Before diving into more detail, let’s first look at what project management and product management are.

1. What Is Project and Project Management?


Simply speaking, a project is temporary work undertaken to produce a unique product, service or result. Project management is the application of knowledge, skills, tools and techniques to project activities in order to meet the project’s needs. This process includes project initiation, planning, execution, monitoring and closure. As a project manager, the target is to manage the people, events and things involved so that the project activities are carried out through careful planning.

2. What Is Product and Product Management?


Products are anything that can be provided to a market, used and consumed by people, and able to satisfy people’s needs, including tangible goods, intangible services, organizations, ideas or a combination of these. Product management is the business activity of managing a product across its life cycle, covering product planning, development, marketing, sales and support. The product manager is mainly responsible for market research, user research and defining the product in accordance with users’ needs. They also consider the product’s business model and its operation and promotion approaches.

3. Project Manager Vs. Product Manager — Differences


Firstly, the project manager requires a technical background; in an IT company, they have to be very experienced, with the responsibility to convert objectives into a quantified and achievable project plan, placing emphasis on management and implementation.

However, the product manager doesn’t require deep technology knowledge but needs to be knowledgeable in the field. In today’s recruitment requirements for product managers in the IT industry, candidates should have relevant work experience, be well informed about Internet products, be proficient in the product design process (function analysis, user role analysis, prototyping, interface development, user testing, etc.), and have excellent communication and documentation skills along with experience in prototyping tools such as Axure, Mockplus or Marvel. Actual development capability is not necessarily required.

Moreover, looking at the responsibility cycle, the project manager is responsible for the completion of a project and then moves seamlessly to the next project, while the product manager grows with the product through countless iterations.


4. Project Manager Vs. Product Manager — Relations


Product managers and project managers in fact work in a very close relationship, and it is hard to set them apart from each other. For instance, the product manager needs to collaborate with the project manager throughout the relevant stages of progress to ensure the project meets the required quality and quantity on time.

Thursday, 17 January 2019

Process Owners: The Unsung Heroes of Improvement

Look around a company that has been using Lean Six Sigma for a while. Listen to the kinds of stories that circulate. Which people are mentioned the most? Likely it is the Black Belt who led a project that saved a million dollars or created a whole new market for a product. Or the Master Black Belt who solved a technical issue that had been bothering the company for years. Undoubtedly the senior executive or Champion who oversees the Six Sigma program or individual projects is getting a lot of credit too.

All well and good. No need to begrudge those people their hard-earned Six Sigma status. But examine more closely the companies which are sustaining the gains from their Lean Six Sigma efforts, and one finds unsung individuals who also are doing an exceptional job – the process owners.

A process owner is the person who has the authority to determine how a process operates, and the responsibility to make sure it continues to meet customer and business needs today and into the future. This is a role that no company can afford to overlook if it wants to be world class.

Responsibilities of an Effective Process Owner


A good process owner knows his or her process like an auto mechanic knows cars. A process owner:

1. Knows what is critical about the process. The process owner must understand what about the output is important to customers of the process and to the business, and must have a thorough understanding of how his or her process fits into the overall scheme of the business:

◈ What processes feed into this process?
◈ What processes rely on output from this process?
◈ What is strategically important about this process? Does it contribute to or support a particular product or service? Does it affect overall operational effectiveness?

2. Monitors process performance with data. This must include both input/process metrics (because they are early predictors of performance) and output measures. In many cases, the process owner is monitoring data compiled by process operators and summarized into a dashboard (Figure 1).


Figure 1: Loan Finance Dashboard

3. Makes sure the process is documented, and that the documentation is used and updated regularly. Some organizations have run into trouble by allowing too much variation in how a process is performed – each employee having their own particular way of doing business. It is up to the process owner to champion an effort to identify the best-known process methods, particularly what parts of the process must be standardized so that output quality and service to customers do not suffer. Those best-known methods must be documented (with flowcharts and other visual displays) and referenced constantly. (Work groups that do not refer to process documentation usually show more erratic performance than those that do.) If the process changes for good reason, the documentation must change as well.

4. Makes sure a process management (or control) plan is in place. (Figure 2)

◈ Everyone in the work area knows how the process should operate.
◈ Everyone knows how to detect signs of trouble and what to do if a problem appears (often called a response plan).
◈ Process data is charted and posted in the work area, visible to all.


Figure 2: Process Management Plan

5. Holds regular reviews. There are two levels of review that the process owner must lead:

◈ A process review – Is the process performing as required by customers and the business? Are the input and output metrics “in control” and “capable”? If not, what are the biggest issues? Who should be assigned to a project team to attack those problems?
◈ A process management review – Is the method of reviewing, monitoring, and managing the process working? If not, what needs to be improved?

6. Makes sure that any improvements identified through projects are incorporated and maintained in the process.

7. Provides linkage to customers, suppliers and other processes. A process owner is the critical link between a process and the rest of the world – both inside and outside the company. More so than any other individual working on the process, the process owner needs to maintain connections in all directions – with supplier processes, with customer processes, with processes above and below in the corporate hierarchy.

8. Makes sure that process operators have the training and resources to do their jobs well. A good process owner is in many ways a servant to the process operators. It is his or her job to figure out what the operators need in order to do their jobs well, and to keep getting better. Those needs can include appropriate training, materials and information.

Tuesday, 15 January 2019

Start with Leaner Tools to Ease Non-Belts into Six Sigma

Six Sigma offers a variety of powerful tools that help organizations make data-driven decisions. Yet most people in an organization do not hold a degree in statistics and may feel that filling out endless data forms is pointless. When first starting a deployment, it is best to make things as easy and painless as possible for the non-Belt community. Once Six Sigma has gained momentum, Belts can enhance the statistical aspect and refine the methods they use.


Here are three examples for leaner tools that could be used to ease process owners and other non-Belts into the method during an initial deployment:

1. Failure Mode and Effects Analysis (FMEA)


If a Six Sigma team does everything manually in the standard FMEA template, it may need to fill in somewhere between 20 and 30 columns per row. To do that, team members may need to get thousands of data records from the process owner. And once the FMEA is complete, will the Champion even care if the risk priority number is 441 or 810?

When starting out, people may not even be capable of telling whether a defect occurs 7 percent or 70 percent of the time. But they do know what you need to be looking for – their most obvious pains. Most likely, the information Belts need from the process owner is this: What and where could something happen? Why would it happen? How bad is it? Who is going to do what about it, and is it effective?

That is a total of seven questions that almost everybody should be able to answer about their process. Asking these questions allows practitioners to get some data quickly, without misunderstanding or redundancy. As the initiative becomes more sophisticated, practitioners can work to refine the FMEA assessment process.
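As one way to picture the lean approach, here is a minimal sketch, our own illustration rather than a standard FMEA template, of a record that captures just those seven answers with a coarse 1-to-3 severity instead of a full risk priority number. The field names and example entries are hypothetical.

    from dataclasses import dataclass

    # Minimal "lean" FMEA record: only the seven questions from the text,
    # with a coarse 1-3 severity rather than a full risk priority number.
    @dataclass
    class LeanFmeaEntry:
        what: str          # What could happen?
        where: str         # Where could it happen?
        why: str           # Why would it happen?
        how_bad: int       # How bad is it? (1 = minor, 3 = severe)
        who: str           # Who is going to act on it?
        action: str        # What are they going to do about it?
        effective: bool    # Is that action believed to be effective?

    entries = [
        LeanFmeaEntry("Order shipped late", "Packing station", "Label printer jams",
                      2, "Shift lead", "Daily printer check", True),
        LeanFmeaEntry("Wrong item picked", "Warehouse aisle 4", "Similar part numbers",
                      3, "Inventory owner", "Relabel bins", False),
    ]

    # Review the worst, unmitigated pains first.
    for e in sorted(entries, key=lambda e: (e.effective, -e.how_bad)):
        print(f"{e.what} @ {e.where}: severity {e.how_bad}, "
              f"action '{e.action}' ({'effective' if e.effective else 'needs work'})")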

2. Analytic Hierarchy Process (AHP)


The AHP consists of simply going through a list of options and asking for each possible pair: Is (the first) more important than (the second) – and if so, by how much? But it can be tedious for larger numbers of options.

Time can be saved, however, by reviewing and optimizing the list beforehand, removing the unnecessary comparison questions. If the team already knows that gadget production is three times more important than widget production, why ask later if widget production is more important than gadget production? Taking that to the next level: If the team knows that gadget production is a factor three over widget production and that widgets are twice as important as trinkets – why waste stakeholder time by asking whether trinkets beat gadgets?

Optimizing AHP requires a bit of thought and definitely some information technology support. But for Belts doing the AHP on six factors, completing optimization first makes the difference between discussing 30 comparisons or nine. The AHP session may be condensed from two hours to 30 minutes, which key decision makers will appreciate.
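Here is a minimal sketch of the optimization idea, reusing the gadget/widget/trinket figures from the paragraph above. The function names and the single-step transitive inference shown are our own simplification of what an AHP tool with IT support might do, not a published algorithm.

    from itertools import combinations

    def full_comparisons(n):
        # Without optimization, every unordered pair must be asked: n * (n - 1) / 2 questions.
        return n * (n - 1) // 2

    # Ratios already known before the workshop, e.g. "gadgets are 3x widgets".
    known = {("gadget", "widget"): 3.0, ("widget", "trinket"): 2.0}

    def infer(known, a, c):
        """Infer a:c from a:b and b:c when a shared middle option exists."""
        for (x, y), r1 in known.items():
            if x == a:
                r2 = known.get((y, c))
                if r2 is not None:
                    return r1 * r2   # e.g. gadget:trinket = 3.0 * 2.0 = 6.0
        return None

    options = ["gadget", "widget", "trinket"]
    print("comparisons without optimization:", full_comparisons(len(options)))
    print("gadget vs trinket (inferred, no need to ask):", infer(known, "gadget", "trinket"))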

3. Quality Function Deployment (QFD)


QFD is a support process for innovation and change, and also helps in assessing the status quo. It is nearly a science, and performs best in the hands of trained experts.

The information needed to first introduce QFD is not necessarily related to interactions, benchmarks and development status. What practitioners really need to know is: who is doing what, and why?

When practitioners know what requirements a process realizes, and what groups are engaged in the operation of the process, they have a solid basis for process improvement. They can still build intricate houses of quality later, when there is at least a formal requirement process.

Create Other Simplified Tools


The list does not stop here. With a small time investment studying a tool, chances are practitioners can find a simplified, leaner version that provides the information Belts really need from process owners in order to produce initial results.

Saturday, 12 January 2019

Analytical Hierarchy Process (AHP) – Getting Oriented

For a tool that has such broad applicability, the analytical hierarchy process (AHP) is not as widely known as might be expected. AHP makes assessments, prioritization and selection among options more readily measurable. Thus it is a natural Six Sigma ally and a part of the toolkit for a growing number of practitioners. AHP, which grows out of work that was done in the field of operations research by mathematician Thomas Saaty, has evolved into a rich set of methods with assessment and prioritization at their core.

The Challenge of Prioritization


When asked to rank or rate a list of things according to some criterion, such as preference, value, risk or cost, one might be able to rank their order and even to assign some numbers to their relative positions on the list. However, two problems arise in that simple scenario:

First, whatever measurement scale is chosen is just ordinal at best. A rating of 10 does not mean the preference, risk or whatever for an item is twice that of an item rated 5. (One might be tempted to treat the numbers as a ratio scale, but there is no basis for it.)

Second, when there are more than a few items on the assessment list, it gets hard to keep all the prioritization considerations in one’s mind at the same time – making it hard to think about and to complete the task.

The AHP Answer


AHP takes that simple-enough looking prioritization problem and makes it simpler and more meaningfully measurable. First, it reduces the list into pairwise comparisons and asks for a ratio assessment of each pair. Using a simple case to illustrate, to assess preference for three features, A, B and C, AHP would set up the three pairwise comparisons (AB, AC and BC).

Making a relative assessment of the members of each pair is something most people find easy to do. Figure 1 traces the preference assessments for three simple requirements – File Type Conversion, Localizability and Compatibility with Legacy System.

The evaluation (Figure 1) shows that File Type Conversion is somewhat more important than Localizability (4.0, transferred to the table below), Compatibility is much more important than Localizability (9.0, transferred to the table), and Compatibility is just a little more important than File Type Conversion (3.0, transferred to the table). AHP captures each assessment, and then computes the ratio-scaled priorities and an “inconsistency ratio” of 0.01, as noted in Table 1.

Figure 1: Assessment Evaluation

Table 1: Assessments Gathered in the AHP Matrix

                                    File Type Conversion    Localizability    Compatibility with Legacy System
File Type Conversion                         –                   4.0                      3.0
Localizability                                                    –                       9.0
Compatibility with Legacy System                                                           –
Inconsistency Ratio                         0.01

Assessing Inconsistency


An interesting side effect of asking a person to make a series of pairwise ratio-based comparisons is the way that they “forget” prior assessments as they go. If their understanding of the system is coherent, the whole set of pairwise comparisons should stack up in a self-consistent way. In a preference assessment, if a person places A much greater than B, then A slightly greater than C and then B slightly greater than C, they have created a set of circumstances that do not make sense as a whole. They have revealed inconsistency in their thinking on the matter. (See Figure 2.) That could show that a respondent was not paying attention or that they do not understand the dynamics of the assessment well enough to see things clearly.

Figure 2: Graphical Display of AHP Assessments

Inconsistency ratios (the calculation of which involves some matrix math) greater than about 0.1 are generally viewed as worthy of concern. Ratios smaller than 0.1 reflect a fairly coherent set of assessments. As a companion to AHP preference rankings, the inconsistency ratio provides useful guidance about how to interpret information coming back from an individual or a group.
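For readers curious about the underlying arithmetic, here is a minimal Python sketch, our own illustration, that rebuilds the Table 1 judgments into a reciprocal matrix, approximates the priorities with the geometric-mean method (a common stand-in for Saaty's eigenvector calculation), and derives the consistency ratio using the standard random index of 0.58 for three items.

    from math import prod

    judgments = {  # "row item is this many times more important than column item"
        ("File Type Conversion", "Localizability"): 4.0,
        ("Compatibility with Legacy System", "Localizability"): 9.0,
        ("Compatibility with Legacy System", "File Type Conversion"): 3.0,
    }
    items = ["File Type Conversion", "Localizability", "Compatibility with Legacy System"]
    n = len(items)

    # Build the full reciprocal matrix: a[i][j] = 1 / a[j][i], diagonal = 1.
    a = [[1.0] * n for _ in range(n)]
    for (stronger, weaker), ratio in judgments.items():
        i, j = items.index(stronger), items.index(weaker)
        a[i][j], a[j][i] = ratio, 1.0 / ratio

    # Priorities: normalized geometric mean of each row (eigenvector approximation).
    gm = [prod(a[i]) ** (1.0 / n) for i in range(n)]
    total = sum(gm)
    weights = [g / total for g in gm]

    # Consistency: lambda_max -> consistency index -> consistency ratio (RI = 0.58 for n = 3).
    lam = sum(sum(a[i][j] * weights[j] for j in range(n)) / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    cr = ci / 0.58

    for item, w in zip(items, weights):
        print(f"{item}: {w:.3f}")
    print(f"inconsistency ratio ~ {cr:.2f}")   # about 0.01, matching Table 1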

AHP for Groups


AHP can be especially useful with groups. Each member’s assessments can, of course, be evaluated for priorities and inconsistency, and then the group rollup (and group segments) can be synthesized and viewed the same way (note the second bar graph in Figure 2). This can be a powerful way to build consensus, as each constituent can see where they stand and compare it to the group as a whole. If the group has a high inconsistency ratio (more than 0.1, or so) segmenting might reveal where the differences in agreement are and why. That, too, can help lead to better understanding and consensus.

Thursday, 10 January 2019

Building Valuable Process Maps Takes Skill and Time


Practitioners who think process mapping can be completed in a two-hour session with a group of subject matter experts, a white board and some sticky notes are likely to end up with a nice piece of paper with a bunch of squares and diamonds. This is because process mapping is not for wimps. Creating a process map that tells a full, data-based story requires a decent amount of time and effort by those individuals involved in the process.

Gathering Information


A great process map should show, with certainty, where improvements can be made, where cycle time delays exist and where smooth handoffs are not taking place. Creating a process, or value stream, map should be the first act a company performs when seeking to make process improvements. If they start more advanced process improvement methodologies without completing a value stream map first, organizations may make a slower start on their road to improvement. Of course, practitioners should not avoid these advanced methodologies. But they will benefit from beginning with a process map, which can make an immediate impact – immediate in the sense of less than three months.

Again, process mapping is not an easy undertaking. It is the perfect combination of business acumen and art. It takes special talent to interview individuals and get them to explain exactly what they do in their job every day, as well as share their pains and express their wants. In fact, it takes the ability to connect with many different types of people and personalities, the know-how to ask questions that will effectively prompt the interviewee and the listening skills to understand what a person is saying – without judgment or prejudice.

A skilled practitioner may ask some of the following questions during an interview to capture process owners’ pains and wants:

◈ What parts of the process do you seek to eliminate, and why?
◈ Where do you spend most of your time, and why?
◈ Where in the process do you repeat work? How often, and why?
◈ What does your manager think happens in the process? What really happens?
◈ When pressed for time, what steps in the process do you skip or work around?

But what about the data-based story component? Well, to perform a true value stream mapping exercise, data must be collected in conjunction and concurrently with the interviews. Questions to collect this data may include:

◈ Where do cycle time delays exist?
◈ Where do handoffs take place?
◈ Do people actually hand something off, or is it submitted to a system with the assumption that it is handed off?
◈ What data points are put into systems? What data points are taken out?
◈ What pains does the process cause? What do people want or desire from the process?

Gathering data is the real power of performing process mapping. The master plot, the final map with all the details, is great for showing people the process, but the juicy stuff is in the data that is collected.

Sample Process Map


The figure below is a picture of an end-to-end sales process; in real life it is eight feet long. The green boxes represent steps where cycle time delays exist. The yellow boxes are manual steps where automation can take place. The lines coming in and out of the circles (multiple systems) indicate data that comes in or out of a system.

Sample Process Map

One of the practitioner’s challenges is to identify exactly how many handoffs there are in the process, and how many inputs go into a system but never get taken out. However, the absolute biggest benefit comes from taking steps out of the process. Once changes have been made, practitioners can calculate a return on investment and assign value to each step in the process. 

Five Key Tips


The following are some tips and tricks for process mapping any process in an organization: 

◈ Scope the process: Clearly define a start and stop in the process.

◈ Identify metrics of importance: To give the effort value, practitioners should determine what they want to eliminate from the process – process steps that generate cycle time, steps where individuals seek approvals, steps where individuals perform manual effort and so on. These will become the steps to color code as action items.

◈ Select a map collection method: Process mapping can be performed using sticky notes, a spreadsheet or technical drawing software program, or paper and pen. Practitioners should select the method that works best for them and their organization.

◈ Validate the process maps: After completing a first round of interviews, practitioners should have someone within the organization who is familiar with the process read the maps. This person should check for clarity, content and continuity. The practitioner can review the feedback with the original interviewee for confirmation.

◈ Limit interviewees at one time: Practitioners should not attempt to create process maps with large groups. It is best to interview one or two people at a time, thereby reducing social conversation and the desire to correct the process during the mapping session.

Tuesday, 8 January 2019

Estimation Method Aids in Analyzing Truncated Data Sets


When working with data sets, practitioners sometimes encounter metrics, such as out-of-roundness and loss-of-moisture measurements, with physical limits. In these scenarios, the data distribution is truncated at the value of physical limitation, creating a distribution outside of the criteria of a normally distributed population. With non-normal data, estimates and predictions using the normal distribution are not accurate, creating the need for alternative methods of analysis to assess the data.

Standard Methods


Typically, when data does not fit the normal distribution and prediction or estimation calculations are made using the assumption of normality, data is transformed and assessed for normality. If the transformed data fits the normal distribution, then calculations are performed using the transformed data with transformed specification limits. Alternatively, if other distributions are found that fit the non-normal data, the capability of the process can be calculated using an alternative distribution, which better fits the data. However, if no alternative distribution is found that fits the data and the data cannot be transformed into a normally distributed data set, other methods of analysis are necessary.

Alternative Method


Due to the nature of truncated data sets, which have a point of central tendency at a physical limit, common transformation methods such as Box–Cox and Johnson are often not sufficient. The following method of estimating the population’s standard deviation for the normal distribution is a practical method that gives a realistic estimate of the standard deviation. It also avoids violation of the assumption of normality when using the Cpk calculation based on the normal distribution. This correction provides practitioners with the ability to predict the spread of the data and assess capability in the direction of the upper specification limit. Prior to using this correction method, however, practitioners must verify that the sample data is of adequate size to approximate the normal distribution.


Empirical research and data results, gathered from both theoretical and production data and analysis, support the theory that estimating the standard deviation is possible for physically limited data by proceeding as if the data were not truncated. Theoretically, this means extending the data beyond the physical limitation of the measurement.

The empirical evidence provides a ratio, or correction factor, between the truncated distribution standard deviation and the theoretical normal distribution. The equation is:

σ_normal ≈ 1.7 × σ_truncated

where

σ_truncated is the standard deviation calculated from the physically limited data set, truncated on one side, and
σ_normal is the standard deviation calculated for the population if the data were not truncated.

The coefficient of 1.7 correlates these two parameters.

This ratio can act as a correction factor for the standard deviation, allowing practitioners to calculate the Cpk based on the assumption of normal data. An accurate calculation of process capability or any other estimate or prediction made using the normal distribution is not valid without this type of correction. In the following example, the standard deviation is estimated for the population using the correction factor.

Example Data Description


In the figure below, the distribution is truncated as it approaches approximately zero readings of moisture. This truncation is due to the physical limitation of the zero bound on a moisture reading (i.e., a product cannot have less than zero units of moisture present). Hence, the data is not able to follow the normal distribution.


Figure 1: Moisture Readings in Batch 3

This truncation can cause the central tendency measurement to sit near the physical limit value, which may be desirable. With out-of-roundness and loss-of-moisture measurements, there is often only an upper specification limit and it is desirable to have low values, as is the case with the data in Figure 1.

The standard deviation for this example data set is 0.3735 units. The estimated (or corrected) standard deviation for the example data set as a normally distributed data set is 0.3735 units multiplied by 1.7, which is equal to 0.6350 units. The mean from the example data set is 0.2746 units. The specification limit is a one-sided upper specification limit (USL) of 8 units.

The following equation is typically used to calculate process capability:

Cpk = min( (USL − μ) / 3σ , (μ − LSL) / 3σ )

where USL and LSL are the upper and lower specification limits, σ is the population standard deviation and μ is the population mean.

However, because no LSL exists in this case, the equation is reduced to:

Cpk = (USL − μ) / 3σ

This alternative process capability estimation can be used for further analysis.
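Putting the example numbers together, here is a minimal Python sketch, our own illustration, of the corrected-sigma and one-sided Cpk calculation described above.

    # Minimal sketch combining the example values quoted above.
    sigma_truncated = 0.3735   # standard deviation of the truncated moisture data
    mean = 0.2746              # sample mean of the moisture readings
    usl = 8.0                  # one-sided upper specification limit

    sigma_corrected = 1.7 * sigma_truncated      # correction factor applied (~0.6350)
    cpk = (usl - mean) / (3 * sigma_corrected)   # one-sided Cpk, since there is no LSL

    print(f"corrected sigma = {sigma_corrected:.4f}")
    print(f"Cpk (upper spec only) = {cpk:.2f}")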

Friday, 4 January 2019

Actionable Information from Soft Data

Engineers, Six Sigma practitioners and other researchers often work with “hard” data – discrete data that can be counted and legitimately expressed as ratios. But what of “soft” data, things like opinions, attitudes and satisfaction? Can statistical process controls (SPC) be applied here? Can process variation in customer satisfaction, for example, be measured and then reported to management in a meaningful way? Can we leverage “appeal,” “responsiveness” or “value for money spent”?

In Visual Explanations, Edward Tufte demonstrates how the NASA Challenger disaster may have been avoided if the Morton Thiokol engineers had displayed their temperature vs. o-ring failure data in a meaningful way. They had all the data they needed – but it did not get translated into information. In a similar fashion, a well-designed survey or comment card will gather a wealth of data. The process of turning soft data into information (assuming the data are valid) is two-fold: knowing what to extract and knowing how to display.

Information Extraction


Visual Inspection and Intuitive Statistics

Visual inspection of data is paramount to understanding it. Raw data, midpoints, ranges, and frequency distributions need to be examined visually before they are fed to a computer for advanced analyses. The need for complete familiarity with the distribution cannot be overstated. Two aspects of data that must be inspected are magnitude and consistency: how much and how many? Inspection will reveal outliers and provide relatively accurate estimates of the median, mean and standard deviation (this requires a bit of practice). The shape of the distribution will indicate whether there is a problem with normality.
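A first pass at this kind of inspection might look like the following minimal sketch, assuming the responses sit in a pandas DataFrame; the column name and data are made up for illustration.

```python
# Minimal sketch of a first-pass inspection: summary statistics plus a
# frequency distribution. Column name and data are illustrative only;
# pandas and matplotlib are assumed to be available.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.DataFrame({"pay_is_fair": [5, 6, 4, 7, 5, 6, 2, 7, 6, 5]})

print(responses.describe())              # count, mean, std, min/max, quartiles
responses["pay_is_fair"].hist(bins=7)    # shape of the distribution on a 7-point scale
plt.show()
```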

Data consistency, often overlooked, should also be examined. Consider the situation of an experiment with six sub-comparisons, each one insignificant, but with all six differences pointing in the same direction. The researcher concludes there are no differences, yet six consistent events yield a probability of about .016 (roughly (0.5)^6, treating each difference as equally likely to fall on either side), a rare event in its own right. No matter how good the statistical software, there is no substitute for human intervention at the right point. The foregoing is meant to help the researcher get a “feel” for the data, since a lack of understanding of the data will be easily transmitted to decision makers.

Leverage

Computer-calculated means and variances should be confirmatory at this point, assuming you have at least interval level data (data are rank-ordered, and have equal intervals between the numbers). We can now consider the item means (from a survey, for example) as performance indicators of small, individual processes. The means tell us how well each item is performing. But how do we know which processes are important and which are irrelevant?

In a well-constructed survey, there will always be one item which captures the overall meaning of the survey results: In an employee satisfaction survey, for example, it might be “I like my job” or “I like working here.” All items on the survey should be pointing, somehow, to this bottom line. If we run correlations of each survey item with the bottom line, satisfaction in this example, we can see how well (or poorly) each item relates to satisfaction.

This is leverage: the correlations reveal which items make a difference, and by how much, to overall satisfaction. We can see which items need to be “leveraged”. By plotting a two by two table of Performance vs. Leverage (means vs. correlations), we can see where to focus first in order to 1) fix problems and 2) exploit what we do best. (See Table 1.) Caveat: Correlation does not mean causation, it only means a relationship exists. There may be an intervening variable that is responsible for causation. A root cause analysis, starting with the low performance, high leverage items, should be conducted, after examining process variation (see below).
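As an illustration (not part of the original article), here is a minimal pandas sketch of this leverage calculation. It assumes the survey responses sit in a DataFrame with one column per item plus a bottom-line item; all column names and numbers are made up.

```python
# Minimal sketch of a leverage analysis: item means (performance) and each
# item's correlation with the bottom-line item (leverage).
# Column names and data are illustrative only.
import pandas as pd

def leverage_table(responses: pd.DataFrame, bottom_line: str) -> pd.DataFrame:
    items = [c for c in responses.columns if c != bottom_line]
    performance = responses[items].mean()                          # how well each item performs
    leverage = responses[items].corrwith(responses[bottom_line])   # relationship to the bottom line
    return pd.DataFrame({"performance": performance, "leverage": leverage})

survey = pd.DataFrame({
    "pay_is_fair":          [5, 6, 4, 7, 5, 6],
    "supervisor_listens":   [6, 6, 5, 7, 6, 7],
    "overall_satisfaction": [5, 6, 4, 7, 5, 7],
})
print(leverage_table(survey, "overall_satisfaction"))
```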

Table 1: Leverage Analysis

Process Variation

But what of the variation in these item processes? The coefficient of variation (Cv: the item’s standard deviation divided by its mean) provides an indicator of process variation for our soft data. It provides information regarding control and consistency. Some items, by their nature, will suggest where to start looking for root causes of problems, but not all. Looking at performance versus process variation may hold a clue for these items.

Knowing that, in general, policies and procedures are static and consistent, and that people are dynamic and inconsistent, we can make an initial stab at where to focus on fixing some problems. Consistently low performance suggests a systemic problem, which in turn suggests that policies, procedures, methods, etc., may be a root cause. Inconsistent (high Cv) performance suggests that people are influencing the variation: training, supervision/leadership, working conditions, etc., are some areas to consider for your fishbone diagram. Plotting the Cv versus performance (means) in a two by two table identifies consistently high performance items, consistently low performance items, and so on. We now have performance and process variation data charted in a meaningful way (see Table 2). To see the relationship of the Cv to the frequency distribution graphically, see Table 3. This is intuitive.
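A minimal sketch of the Cv calculation and the performance-versus-variation split might look like this; the column names, data and cut points are illustrative assumptions, not values from the article.

```python
# Minimal sketch: coefficient of variation (Cv) per item and a simple
# two-by-two style classification of performance vs. consistency.
import pandas as pd

responses = pd.DataFrame({
    "pay_is_fair":        [5, 6, 4, 7, 5, 6],
    "supervisor_listens": [6, 6, 5, 7, 6, 7],
})

performance = responses.mean()
cv = responses.std(ddof=1) / performance     # standard deviation divided by the mean

table = pd.DataFrame({"performance": performance, "cv": cv})
table["performance_band"] = performance.ge(5.5).map({True: "high", False: "low"})     # illustrative cut point
table["consistency"] = cv.le(0.15).map({True: "consistent", False: "inconsistent"})   # illustrative cut point
print(table)
```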

Table 2: Process Analysis

Actionable Information


Making Data Understandable

Displaying technically derived data (means, variances, correlations) to decision makers will require explanations that may overshadow and obscure the actual information to be conveyed. For example, explaining that there is a statistically significant difference between a mean of 5.84 and 5.43 on a 7-point survey scale will not promote your mission or your conclusions.

Consider converting everything to percentages: this allows easy comparison across all items, as well as quick evaluation of each item. The numbers above convert to 83 percent and 78 percent, respectively. Everyone can quickly see and evaluate a difference of 5 percent with minimal explanation. The leverage data, currently in the form of correlations, should be converted to shared variance: square the correlation and multiply by 100. The display of an item with 60 percent leverage versus one with 30 percent makes technical explanations unnecessary – the boss can see which one is more important and by how much, and has a good understanding of why. Next, convert the Cv (standard deviation/mean) to a percentage by multiplying by 100. The only explanation required here is “lower is better” (Six Sigma standards will rarely apply to soft data). The beauty of these conversions is that the information contained in the data has not been lost or altered: information integrity remains intact, but now it is understandable at a glance.
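The conversions described above reduce to a few one-line formulas; the sketch below (assuming a 7-point scale, as in the example) shows them in plain Python.

```python
# Minimal sketch of the percentage conversions, assuming a 7-point survey scale.

SCALE_MAX = 7.0

def performance_percent(item_mean: float) -> float:
    return 100.0 * item_mean / SCALE_MAX          # item mean as a percentage of the scale

def shared_variance_percent(correlation: float) -> float:
    return 100.0 * correlation ** 2               # square the correlation, multiply by 100

def cv_percent(std_dev: float, mean: float) -> float:
    return 100.0 * std_dev / mean                 # lower is better

print(round(performance_percent(5.84)))   # about 83
print(round(performance_percent(5.43)))   # about 78
```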

The (Almost) Holy Grail

We now have information that is approaching actionability: performance, leverage, and process variation expressed in a recognizable format. If your survey has been well designed, you will also have collected some demographic data (it does not take much). Sort performance, leverage and variation by the demographic data; the derived information will change with each sort, specific to each demographic. We now have target groups.
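As a sketch of the demographic sort, assuming the responses include a demographic column (the names below are made up), a simple group-by recomputes the performance figures for each target group.

```python
# Minimal sketch: recompute item means (performance) by demographic group.
# Column names and data are illustrative only.
import pandas as pd

survey = pd.DataFrame({
    "department":           ["ops", "ops", "sales", "sales"],
    "pay_is_fair":          [5, 6, 4, 7],
    "overall_satisfaction": [5, 6, 4, 7],
})

by_group = survey.groupby("department")[["pay_is_fair", "overall_satisfaction"]].mean()
print(by_group)
```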

Using the two by two tables we can demonstrate, by target group, which items are important, in control, performing well, and should be exploited: this is what we do best, capitalize on it. We can also identify which items need to be fixed, and in order of priority. As noted under Process Variation above, consistently low performance points to systemic causes (policies, procedures, methods), while inconsistent (high Cv) performance points to people-related causes (training, supervision/leadership, working conditions) worth considering for your fishbone diagram.

Your committee, boss and CEO now have rich information regarding what to exploit, what to fix, and where to look. A question that often arises at this point is, “Anyone have any ideas on how to do this?” If the survey was well designed, it solicited comments in such a way that it greatly increased the chances of garnering actionable ideas: “Give us ONE good idea on how we can improve xxxx.” This is a simple and focused task, rather than a vague request, and tends to elicit actionable responses. Review all comments (data inspection). Review them again, this time looking for themes. Group the comments by theme. Your customers, employees, constituents, etc., can generate a smorgasbord of ideas. Enjoy the buffet.

Once you have a feel for your data, you can run these (relatively) simple analyses and comparisons and display clear and powerful information that provides road maps for action.

Table 3: Getting a Feel for Data

Thursday, 3 January 2019

Determine The Root Cause: 5 Whys

Asking “Why?” may be a favorite technique of your 3-year-old child in driving you crazy, but it could teach you a valuable Six Sigma quality lesson. The 5 Whys is a technique used in the Analyze phase of the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) methodology. It is a great Six Sigma tool that does not involve data segmentation, hypothesis testing, regression or other advanced statistical tools, and in many cases can be completed without a data collection plan.


By repeatedly asking the question “Why” (five is a good rule of thumb), you can peel away the layers of symptoms which can lead to the root cause of a problem. Very often the ostensible reason for a problem will lead you to another question. Although this technique is called “5 Whys,” you may find that you will need to ask the question fewer or more times than five before you find the issue related to a problem.

Benefits of the 5 Whys


◈ Helps identify the root cause of a problem.
◈ Determines the relationship between different root causes of a problem.
◈ Is one of the simplest tools; easy to complete without statistical analysis.


When Is 5 Whys Most Useful?


◈ When problems involve human factors or interactions.
◈ In day-to-day business life; can be used inside or outside of a Six Sigma project.


How to Complete the 5 Whys


1. Write down the specific problem. Writing the issue helps you formalize the problem and describe it completely. It also helps a team focus on the same problem.

2. Ask Why the problem happens and write the answer down below the problem.

3. If the answer you just provided doesn’t identify the root cause of the problem that you wrote down in Step 1, ask Why again and write that answer down.

4. Loop back to step 3 until the team agrees that the problem’s root cause has been identified. Again, this may take fewer or more than five Whys.
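Purely as a record-keeping aid for the steps above (the thinking still happens in the room), a minimal sketch of this loop could look like the following; the function name and prompts are made up.

```python
# Minimal sketch: capture the chain of "Why?" answers until the team agrees
# the root cause has been reached. Interactive; run from a terminal.

def five_whys(problem: str) -> list[str]:
    print(f"Problem: {problem}")
    chain = []
    while True:
        answer = input("Why? (press Enter when the root cause is agreed): ").strip()
        if not answer:
            return chain
        chain.append(answer)

# Example usage:
# causes = five_whys("Customers are shipped products that don't meet their specifications.")
# print("Root cause:", causes[-1] if causes else "not identified")
```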

5 Whys Examples


Problem Statement: Customers are unhappy because they are being shipped products that don’t meet their specifications.

1. Why are customers being shipped bad products?

– Because manufacturing built the products to a specification that is different from what the customer and the sales person agreed to.

2. Why did manufacturing build the products to a different specification than that of sales?

– Because the sales person expedites work on the shop floor by calling the head of manufacturing directly to begin work. An error happened when the specifications were being communicated or written down.

3. Why does the sales person call the head of manufacturing directly to start work instead of following the procedure established in the company?

– Because the “start work” form requires the sales director’s approval before work can begin and slows the manufacturing process (or stops it when the director is out of the office).

4. Why does the form contain an approval for the sales director?

– Because the sales director needs to be continually updated on sales for discussions with the CEO.

In this case only four Whys were required to find out that a non-value-added signature authority is helping to cause a process breakdown.

Let’s take a look at a slightly more humorous example modified from Marc R.’s posting of 5 Whys in the iSixSigma Dictionary.

Problem Statement: You are on your way home from work and your car stops in the middle of the road.

1. Why did your car stop?
– Because it ran out of gas.

2. Why did it run out of gas?
– Because I didn’t buy any gas on my way to work.

3. Why didn’t you buy any gas this morning?
– Because I didn’t have any money.

4. Why didn’t you have any money?
– Because I lost it all last night in a poker game.

5. Why did you lose your money in last night’s poker game?
– Because I’m not very good at “bluffing” when I don’t have a good hand.

As you can see, in both examples the final Why leads the team to a statement (root cause) that the team can take action upon. It is much quicker to come up with a system that keeps the sales director updated on recent sales or teach a person to “bluff” a hand than it is to try to directly solve the stated problems above without further investigation.

5 Whys and the Fishbone Diagram


The 5 Whys can be used individually or as a part of the fishbone (also known as the cause and effect or Ishikawa) diagram. The fishbone diagram helps you explore all potential or real causes that result in a single defect or failure. Once all inputs are established on the fishbone, you can use the 5 Whys technique to drill down to the root causes.