Saturday 29 June 2019

Experiential Learning of Lean Six Sigma with Marbles and Toy Cars

Conducting experiments using marbles and toy cars is a fun way for practitioners to teach students how to use graphical and analytical tools, and to give them a better understanding of the Six Sigma mantra to “control x, then Y will take care of itself.”

After about 30 years of continual improvement, modern Lean Six Sigma training has become highly sophisticated in its use of experiential group learning. For example, instructors use such aids as “statapults” and paper helicopters to teach designed experiments, and peanuts or M&M’s candies to help introduce students to attribute pass/fail measurement system analysis.

To this list of fun and recognizable teaching aids can be added marbles and toy cars as another enrichment of Lean Six Sigma classroom training. Through experiments using these aids, students can practice graphical and analytical tools, and also gain a better understanding of the Six Sigma mantra to “control x, then Y will take care of itself.”

Prerequisites for Training Exercises


To be suitable for Lean Six Sigma training, an exercise needs to fulfill a range of preconditions:

• It should work with groups of five to 25 people
• It should be highly visual and fun to do, while requiring little expertise
• It should have some hidden snags and pitfalls
• The equipment used in the exercise should be inexpensive and must fit inside the trainer’s suitcase

The marble and toy car experiments fit all the conditions above. The team sizes ideally range from three to five people, and the materials needed are inexpensive and easy to carry. The experiments require nothing more than the following materials:

• Set of marbles (two to three different sizes per team) or toy cars with pull-back motors
• Roll of sticky tape
• Flip-chart paper
• Rulers
• Chairs
• Tape measures

Only a few inexpensive items are needed to conduct these classroom learning experiments.

Setup and Introduction


Marble experiment – For this experiment, the object is to have team members roll a series of marbles down a slope and measure where they come to rest on the floor. From there, team members can use Lean Six Sigma tools to analyze the relationship between the height of the chair and the distance the marbles roll, and predict where marbles will end up after future rolls from different heights.

To set up the marble experiment, students should attach a slope made of flip-chart paper to a chair (Figure 1). To have the marble roll down the same way each time, it is preferable to give the paper slope a concave curve. If available, different teams can set up experiments on different types of floors (stone, carpet, wood, etc.).

Figure 1: Setup of the Marble Experiment

Instructors can ask teams to develop their own definition for the distance the marble will roll after release from a given height. For example, a team might pick the total rolling distance (including the rolling on the slope) as the measure for distance. In Figure 2, this corresponds to height plus distance. When debriefing after the exercise, the instructor can point to the differences in definitions that the teams pick to emphasize the importance of a consistent operational definition.

Toy car experiment – The object of this exercise is similar to the marble experiment, but involves the rolling of toy cars across a flat surface. After the cars are pulled back and released, teams will use a ruler and a tape measure to determine how far the cars have sprung forward.

Analysis Possibilities


Once the experiments are set up, teams can brainstorm for potential factors influencing the distances that the marbles or toy cars will roll forward. The factors can then be displayed in an Ishikawa, or fishbone, diagram. For the trainer, this offers an opportunity to review the difference between continuous (variable) and discrete (finite) factors that can affect how far the objects will roll.

After rolling the marbles and cars, trainers can then discuss various forms of graphical analysis to interpret the data. For example, scatter plots can be used for analyzing distance versus height, and individual value charts or box plots can be used to track the distances reached by each of three different marbles rolled multiple times from the same height. To help students review their current understanding, instructors should encourage them to ask questions such as “How would the data look if the factor x were significant?” and “What would the data look like if that factor were not significant?”
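
For teams comfortable with software, a minimal plotting sketch along these lines can support the discussion; the heights, distances and marble sizes below are hypothetical classroom values, not data from an actual session.

    # A minimal sketch of the graphical analysis, assuming hypothetical
    # classroom data; all values below are illustrative only.
    import matplotlib.pyplot as plt

    heights = [20, 30, 40, 50, 60]              # release height x (cm)
    distances = [45, 72, 98, 121, 150]          # resting distance Y (cm)
    marble_runs = {                             # repeated rolls from one height
        "small":  [96, 99, 101, 97, 100],
        "medium": [104, 108, 103, 106, 105],
        "large":  [110, 115, 112, 109, 114],
    }

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    ax1.scatter(heights, distances)             # distance vs. height
    ax1.set_xlabel("release height x (cm)")
    ax1.set_ylabel("distance Y (cm)")
    ax2.boxplot(list(marble_runs.values()), labels=list(marble_runs.keys()))
    ax2.set_ylabel("distance Y (cm)")           # spread per marble size
    plt.tight_layout()
    plt.show()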

The marble or toy car experiments also provide practical experience in linear regression. In this case, the length of the tape measure (e.g. 2 meters) should be used for the upper limit of the “operational range.” A lower limit for the operational range of about 20 centimeters (cm) is reasonable because “noise” due to friction and other influences becomes important for small distances.

Within about 20 minutes, teams are able to study the relation between the distance (Y) and the height (x). For that purpose, they can translate the operational range for Y into the corresponding range for x and develop a data collection plan to cover the full range with as many data points as possible. Figure 2 displays the data from one such experiment.

Before starting a training session, instructors should ask participants to rank their knowledge of Excel or other analysis tools. If several participants consider their knowledge low, trainers can prepare scatter plots on flip-charts and perform an “eyeball-fit” regression analysis, rather than collect and analyze data in Excel or another software program. The eyeball-fit delivers a relation between Y and x, which also allows control of the process.
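
Where a team does want to go beyond the eyeball fit, the regression itself takes only a few lines of Python; this is a sketch using NumPy’s least-squares fit, with illustrative data standing in for a team’s measurements.

    # A least-squares alternative to the eyeball fit; data are illustrative.
    import numpy as np

    height = np.array([20, 30, 40, 50, 60])        # x (cm)
    distance = np.array([45, 72, 98, 121, 150])    # Y (cm)

    slope, intercept = np.polyfit(height, distance, 1)
    print(f"Y = {slope:.2f} * x + {intercept:.2f}")

    # Invert the fit to control the process: which height x gives Y = 150 cm?
    target_Y = 150
    required_height = (target_Y - intercept) / slope
    print(f"set x to about {required_height:.0f} cm")

Either way, the fitted relation between Y and x is what lets a team dial in a requested distance.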

Eventually teams can compete against each other. For example, instructors can give a target – to roll the marble or drive the toy car to five different distances, such as 20cm, 50cm, 100cm, 130cm and 160cm. Actual distances are then recorded and teams are scored on their “mean squared offset” – the squared differences between actual and target distances, averaged over all five targets.
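
A sketch of the scoring calculation, assuming the five targets above and a hypothetical set of actual distances:

    # "Mean squared offset": squared target-to-actual differences, averaged.
    def mean_squared_offset(targets, actuals):
        return sum((t - a) ** 2 for t, a in zip(targets, actuals)) / len(targets)

    targets = [20, 50, 100, 130, 160]             # cm, set by the instructor
    actuals = [24, 47, 109, 126, 168]             # cm, hypothetical team results
    print(mean_squared_offset(targets, actuals))  # lower scores win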

The graph in Figure 2 also shows the 95 percent prediction intervals for rolling a marble. When x is set to a given value, the marble can be predicted (with 95 percent confidence) to roll to a distance between the lower and the upper prediction limits. Statistical analysis software can readily calculate these limits. For the data used here, the two limits are about 25cm (±12.5cm) apart. If “the customer” (i.e., the trainer) wanted a marble to be rolled to a distance of 150cm with a tolerance of ±20cm, the team could then translate these specification limits for distance (Y) into specification limits for height (x). This method is called rational tolerancing.
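
Instructors who want to reproduce the prediction-interval calculation can let a statistics package do the work. This sketch uses Python’s statsmodels with the same kind of hypothetical height/distance data as above; the “obs_ci” columns are the 95 percent prediction bounds that rational tolerancing works backward from.

    # Prediction intervals for the marble regression; data are illustrative.
    import numpy as np
    import statsmodels.api as sm

    height = np.array([20, 30, 40, 50, 60])        # x (cm)
    distance = np.array([45, 72, 98, 121, 150])    # Y (cm)

    model = sm.OLS(distance, sm.add_constant(height)).fit()

    # Predict for new release heights and read off the 95% prediction interval
    x_new = sm.add_constant(np.array([35.0, 45.0]), has_constant="add")
    frame = model.get_prediction(x_new).summary_frame(alpha=0.05)
    print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])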

Figure 2: Prediction Interval Chart for Marble Experiment

Hints for the Trainer


The trainer should try the experiments at least once before using them in class. For example, glass marbles can roll extremely far on a polished stone floor. The experiment may then be difficult or even impossible to conduct, unless balls made of rubber or some other softer material are used instead.

Because four wheels are involved instead of just one sphere, the distances that toy cars roll show higher variation. It is interesting to observe how marbles and toy cars differ in the way that they roll down a slope. Instructors should be sure to let teams compare the results.

Experiments with marbles and toy cars are simple to set up and help deliver key messages during Lean Six Sigma training. Because they involve hands-on data collection and analysis, they can be a fun way to teach concepts.

Friday 28 June 2019

What is Strategic Project Management?

While project management takes a project from its starting point to its end, strategic project management looks at the big picture. It links the project to how it benefits the company’s efficiency and competitiveness.

Strategic project management identifies the organisation’s long-term goals and objectives and implements them in the project. With top tier management involvement, it explains why the organisation exists and the context within which it operates.

There are three common components which drive the project to its ultimate goal for the company:

1. Strategic analysis


This forms the basis for which projects an organisation chooses to undertake. Each project needs to link to the organisation’s mission and be key to meeting long-term objectives.

However, bearing in mind that strategic management is about the big picture, it also addresses external factors that could affect progress. Thus, project managers often use strategic analysis tools such as PESTLE to identify potential issues and minimise their impact.

2. Strategic choice


Just how does a company decide which projects to be involved with? Managing multiple projects is a complex task, and something that project managers do in their daily routine. But deciding on the ‘right’ projects is an important step which requires a strategic choice.

Essentially, it means identifying projects that meet the aspirations and expectations of stakeholders, while also playing to the company’s strengths. There’s also a need to identify and take advantage of external opportunities, while avoiding external threats.

3. Strategic implementation


With the scene set, the third stage of strategic management is implementation. Here, strategic project management sets out the long-, medium- and short-term goals for projects and programmes.

Every company wants to grow, so it needs to take advantage of the opportunities it creates for itself and optimise external influences. Strategic implementation examines all kinds of benefits, including:

◈ The use and benefits of collaborative tools in projects

◈ How people and resources are assigned

◈ The ‘why?’ of projects, not just at a base level, but from the top of a company.

Measuring the success of strategic project management


Any strategy and project within the ‘bigger picture’ needs to have indicators to measure success. The same is true for strategic project management.

Strategic project managers often use these four categories of performance measurement:

1. Finance
2. Customer
3. Learning and growth
4. Internal business processes.

Essentially, they provide the basis for defining objectives for programmes, portfolios and projects.

Is strategic project management important?


Yes, and it is a clear benefit to an organisation as it defines its growth path. There needs to be a close, symbiotic relationship between strategic project management and management ‘on the ground’. That’s why it’s an integral part of strong project management leadership skills.

Thursday 27 June 2019

Harvesting Value in Transactional Processes with Lean Six Sigma

In an era of high competition, with many companies facing a less-than-certain future, the need to increase performance in the eyes of the customer has never been stronger. Transactional processes play a major role in any company’s interaction with the customer, which makes them excellent candidates for Lean Six Sigma improvements. Methods and tools that have previously been very successful over many decades in the manufacturing sector can also be very effective in transactional environments.

Transactional vs. Manufacturing


Before practitioners begin deploying Lean Six Sigma in a transactional environment, they should become familiar with the inherent similarities and differences between the two process types. Compared with manufacturing, transactional processes tend to have:

◈ Decision-based business processes rather than fixed processes.
◈ Work activities that are more interdependent than linear.
◈ A strong need for fast adaptability as customers increasingly demand change.
◈ Many more workflows and cross-functional interactions.
◈ Process capabilities that are evaluated by the strength and frequency of customer complaints rather than objective metrics collected during processing.
◈ Processes that take place across multiple geographical locations due to historical acquisitions and prudent risk management practices.
◈ Inventory that is less visible, mostly in the form of electronic data and correspondence rather than physical items.

As a result of these differences:

◈ Staff executing transactional processes need to be more empowered so decision making can happen fast.
◈ The majority of the defects are at the workflow level rather than the individual task level.
◈ Defects are relatively invisible and not evident until experienced by the customer.

Since, historically, the transactional business sector has had less experience with adopting data-driven Lean Six Sigma activities, formalization of improvement programs can generate more resistance across many levels of the organization than it would in a manufacturing operation. In order to demonstrate the value of business process improvement quickly, some unique characteristics need to be understood.

Characteristics of Transactional Processes


Many transactional processes contain the combination of an execution step followed by an approval step. For illustrative purposes, the execution time (Figure 1) has a mean of four hours (with a standard deviation of one hour) followed by an approval step (Figure 2) with a mean of 12 hours. Typically, the cycle times for execution steps follow a normal distribution, while those with a waiting or queuing process follow an exponential distribution.

Figure 1: Execution Step

Figure 2: Approval Step

A simple simulation in Figure 3 shows that the total cycle time (combining both the execution and the approval steps) at the 50 percent point is 12 hours. However, the cycle time is 32 hours at the 90 percent point and 40 hours at the 95 percent point.

Figure 3: Total Cycle Time

As shown in Figure 3, the presence of this extended tail of the distribution will clearly add to the dissatisfaction level of the customers. From this data, it is evident that, to improve overall delivery performance, emphasis should be placed on reducing the duration of the approval step rather than the usually observed organizational behavior of reducing the execution time.
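
The simulation behind these numbers is straightforward to reproduce. The sketch below draws execution times from a normal distribution (mean four hours, standard deviation one hour) and approval times from an exponential distribution (mean 12 hours), then reads off the percentile points; with a large sample the results land close to the values shown in Figure 3 and Table 1.

    # Monte Carlo sketch of total cycle time: normal execution step followed
    # by an exponential approval step, repeated for one to three cycles.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000

    for cycles in (1, 2, 3):
        execution = rng.normal(4, 1, size=(n, cycles)).sum(axis=1)
        approval = rng.exponential(12, size=(n, cycles)).sum(axis=1)
        total = execution + approval
        p50, p90, p95 = np.percentile(total, [50, 90, 95])
        print(f"{cycles} cycle(s): 50%={p50:.0f}h  90%={p90:.0f}h  95%={p95:.0f}h")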

Often this sequence of execution and approval steps appears multiple times in a business process. Figure 4 and Table 1 show the results for one, two and three successive cycles of the example process that has a mean of four hours (one-hour standard deviation) for the execution step and a mean of 12 hours for the approval step.

Figure 4: Comparison of Multiple Execution/Approval Cycles

Table 1: Summary of Execution/Approval Cycles

Cycle count       50%    90%    95%
One cycle          12     32     40
Two cycles         25     63     80
Three cycles       37     95    120

(All cycle times in hours.)

For each additional cycle, the 50 percent point moves out by 12 hours, but the 90 and 95 percent points move out by roughly 32 and 40 hours respectively. This data illustrates two points:

1. This additional wait time that previously may have been attributed to lackadaisical management practices is simply an outcome of the interaction between these two different types of distribution. The performance of these approval processes, or even the need for these approval processes, should be reviewed and addressed.

2. Because customers are much more sensitive to delays than to early completion, it is more important to improve the performance of the tail of the distribution, at the 90, 95 and 99 percent points. This metric is much more meaningful than the overall process averages that are used to define performance in service-level agreements.

Tool Flow in a Transactional Environment


Transactional processes do not have the same physical characteristics as operational processes, but the volume and complexity of transactions that have to be executed by financial, healthcare and other data processing departments still require the rigor of Lean Six Sigma.

To address underperformance, a basic framework of five tools should be introduced: a project charter, a SIPOC chart (Suppliers, Inputs, Process, Outputs, Customers), a value stream map, a cause-and-effect diagram and selection matrix, and a process-failure mode and effects (P-FMEA) analysis. The action items selected from the P-FMEA can then be executed using the project management methods and resources present in the organization, and control plans can be introduced. Here are some basic descriptions of the five tools and their utility in the transactional arena:

Project charter – This document provides a detailed description of the business needs that will be addressed in the project. The charter serves as the link to the business problem, the financial impact and the identification of the process owner, and adds clarity and a sense of common purpose to the project.

SIPOC diagram – This tool is a high-level process map that identifies all relevant elements of a process improvement project before work begins. Often there is no common view of all the process inputs and outputs required. Inputs are gathered serially rather than in parallel, and outputs are provided on request rather than as a structured part of the process output. The SIPOC provides a common high-level view of the process to be improved – something that can become lost in a geographically dispersed organization.

Value stream map – Typically used in Lean projects, this pencil-and-paper tool helps practitioners see and understand the flow of material and information as a product or service makes its way through the value stream. By collectively visualizing the entire workflow and introducing the concepts of rolled yield and rework, these maps provide insight into the areas that need to be addressed.

Transactional processes are prone to develop numerous rework loops, and unless quantified and controlled, the actual performance level can degrade dramatically. This is not visible to the wider organization that uses a simple linear model. Some of the greatest opportunities for driving process excellence can be found by using value stream mapping to analyze the gaps between functional groups.

Cause-and-effect diagram and selection matrix – Also called an Ishikawa, or fishbone, diagram, the cause-and-effect diagram is a visual tool used to organize possible causes for a specific problem into logical groups and to help identify root causes. This is an opportunity for all involved to provide ideas about issues that affect the whole process. Many effects that arise are caused by prior actions of individuals who have not visualized the downstream consequences. Often, feedback systems do not exist at all, so the same issues continually recur. The formality of the process brings individuals together who would not otherwise meet in a problem-solving environment.

The value of establishing a common understanding of the problem, of possible solutions and of possible implementations is extremely important in these environments. Transactional processes tend to involve multiple geographical locations involving people who only ever meet by phone or email to execute actions. Coming together to look at the entire process simply does not take place without the rigor of these types of initiatives. By using these cause-and-effect diagrams and selection matrices, the team can move into the execution stage with the confidence that they are focusing in the right areas.

P-FMEA – This tool, which helps identify and prevent potential problems before they occur, provides a structured approach to the myriad ideas that exist within the organization. This is especially powerful in a geographically distributed organization, where coming together to develop common assessment criteria is not normal practice. Once these ideas are selected and resources allocated, the implementation phase can begin.
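
As one illustration of how a P-FMEA ranks the ideas it gathers, the conventional risk priority number (RPN) multiplies severity, occurrence and detection ratings, each typically scored 1 to 10. The failure modes in this sketch are hypothetical.

    # Rank hypothetical transactional failure modes by RPN = S x O x D.
    failure_modes = [
        # (description, severity, occurrence, detection)
        ("approval request lost in email", 7, 6, 5),
        ("wrong account code entered", 8, 3, 4),
        ("duplicate transaction processed", 9, 2, 6),
    ]

    ranked = sorted(
        ((s * o * d, name) for name, s, o, d in failure_modes), reverse=True
    )
    for rpn, name in ranked:
        print(f"RPN {rpn:4d}  {name}")

The highest-RPN items become the action items carried forward into implementation.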

Gaining Better Insight


The advantage of this simple sequence of tools is that the training can be carried out by just-in-time learning and facilitated by cross-functional teams led by Black Belts. Results can be obtained quickly, providing insight into cycle-time reduction and process improvement across various transactional environments. Once these insights are understood and the results demonstrated, the opportunities to better serve the customer and develop top-line growth can be made visible and the benefits realized by the broader application of these techniques.

Wednesday 26 June 2019

Top Ten Six Sigma Black Belt Candidate Qualities

Whether you are a process owner, Master Black Belt or Champion, you will at some point need to interview candidates for an open Black Belt position. Or, you may be a Green Belt or quality engineer with aspirations of someday working full-time as a Six Sigma Black Belt. What should you look for in a Black Belt candidate or how should you develop yourself for a Black Belt position? This article will address these questions.

The Role of the Six Sigma Black Belt


Six Sigma Black Belts are most often referred to as change agents, and there is no doubt that the Black Belt role is a leadership position within an organization (please note that I intentionally did not say “within the quality department or Six Sigma organization”). Black Belts are full-time Six Sigma project team leaders responsible for implementing process improvement projects (DMAIC [Define, Measure, Analyze, Improve, Control] or DFSS [Design for Six Sigma]) within the business. Black Belts are knowledgeable and highly skilled in the use of the Six Sigma methodologies and tools, as well as facilitation and change management, and lead subject matter experts to increase customer satisfaction levels and business productivity.

Black Belts have typically completed four weeks of Six Sigma training, and have demonstrated mastery of the subject matter through the completion of project(s) and an exam. Black Belts coach Green Belts and receive coaching and support from Master Black Belts. It is generally expected that a Black Belt will move into a Master Black Belt or significant business role after the Black Belt assignment is completed in 18 months to three years.

Six Sigma Black Belt Qualities


So, what should you look for in your next Black Belt? Here is my personal top ten list. You will notice that I bulletized the list instead of numbering it. This was done on purpose, as a numbered list usually indicates that one point might be more valuable than another. In this case, all ten qualities are considered essential and should have equal weighting.
  • Customer Advocacy. Black Belts should readily communicate the understanding that customers are always the recipients of processes, and that customers (both internal and external) are always the final judge of product or service quality. Understanding customer needs (“CTQs,” although they may not know the Six Sigma language yet) is the key to process improvement. Hence, a Black Belt candidate should speak clearly about how eliminating process variation is a key to business improvement.
  • Passion. No cold fish are welcomed in Six Sigma. Black Belts must be self-motivated, have initiative, and have a positive personality. At times they are expected to be a cheerleader, to pick up the team and help them move forward productively. Passion also gives them fortitude to persevere, even when the going may get tough on a project.
  • Change Leadership. Black Belts have demonstrated performance as a change agent in the past, regardless of their job duties. During the interview, ask them how they challenged the status quo in their last role. They didn’t?…well, they may not be the right person for your Black Belt position. Changing the organization and how business is accomplished may upset employees; change agents and change leaders have a way of accomplishing positive change while engendering support for the change.
  • Communication. Black Belts are effective communicators, which is essential for the many roles they serve: trainers, coaches, and mentors. Black Belts should be able to speak understandably to all audiences (from shop floor employees to executive management). Understanding the various needs of audience members and tailoring the message to address their concerns is the mark of an effective communicator. Once a Black Belt has these qualities, creating PowerPoint presentation slides (a requirement in corporate America, right?) is a snap.
  • Business Acumen. Black Belts are business leaders, not the quality managers of the past. As such, they should have business knowledge and the ability to display the linkage between projects and desired business results. How is a project making the company stronger competitively and financially? You can ask questions during the interview to determine if the Black Belt candidates have made this connection in their prior roles.
  • Project Management. Six Sigma is accomplished one project at a time. We should not lose sight of the fact that the Black Belt must manage projects from scope, requirements, resources, timeline, and variance perspectives. Knowledge of project management fundamentals and experience managing projects are essential.
  • Technical Aptitude. The Black Belt candidate need not be an engineering or statistical graduate, but in some cases this is beneficial – provided the other top ten qualities listed are also present. In all cases, a Black Belt is required to collect and analyze data for determining an improvement strategy. Without some technical aptitude (computer/software literacy and analytical skills) the Black Belt will be frustrated in this role.
  • Team Player and Leader. Black Belts must possess the ability to lead, work with teams, be part of a team, and understand team dynamics (forming, storming, norming, performing). In order to effectively lead a team, a Black Belt must be likeable, get along with people, have good influencing skills, and motivate others.
  • Result Oriented. Black Belts are expected to perform and produce tangible financial results for the business. They must be hard working and quick to demonstrate success.
  • Fun. Black Belts should enjoy their jobs if they are passionate about them. By having fun, you encourage others to do the same.

Qualities that Did Not Make the Top Ten (But Are Important)

  • Trust and Integrity. It almost goes without saying, but if I didn’t list these two qualities someone would have emailed me. These are requirements and are non-negotiable.
  • Deep Process Knowledge. Six Sigma involves having a team of subject matter experts working to eliminate defects and improve a process. Obviously, someone on the team must have a deep knowledge of the process being investigated. This does not have to be the Black Belt, but it can be.
  • Been There, Done That. Sometimes a team gives credibility to a Black Belt that has “been through it.” When the team is forming, this can help accelerate the acceptance of the Black Belt, but it’s not a requirement.
  • Knows Six Sigma, ISO, TQM, Etc. Remember, you are building your business leadership pipeline one Black Belt at a time. Having a specific and detailed knowledge of Six Sigma is not a prerequisite – they will go through training; having the top ten list of qualities for a Black Belt (listed above) is.
  • Diverse Work Experience. This will enable the Black Belt to appreciate more than just one aspect of a process improvement project. For example, if Black Belts are fresh out of a statistics college program, they are likely to predominantly utilize newly acquired skills and tools. Black Belts with a diverse background can appreciate projects and issues more holistically.
  • A Degree. While having a degree supports the idea that a person has developed independent thinking skills, not having a degree does not imply that the Black Belt candidate does not have independent thinking skills. This quality is very debatable as I have seen excellent Black Belts with and without degrees.

Tuesday 25 June 2019

Risk Management: Objectives, Advantages And Disadvantages

Risk Management is the identification, assessment, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events. Risks can come from uncertainty in financial markets, project failures, legal liabilities, credit risks, accidents, natural causes and disasters as well as deliberate attacks from an adversary.

Risk management is the acceptance of responsibility for recognizing, identifying, and controlling the exposures to loss or injury which are created by the activities of an organization. By contrast, insurance management involves responsibility for only those risks which are actually insured against.

Meaning and Definition of Risk Management


According to Jorion, “Risk management is the process by which various risk exposures are identified, measured and controlled. Our understanding of risk has been much improved by the development of derivatives markets.”

Accordingly, the term ‘risk management’ refers to the systematic application of principles, approach, and processes to the tasks of identifying and assessing risks, and then planning and implementing risk responses. This provides a disciplined environment for proactive decision-making.

Objectives of Risk Management


1. Ensure the management of risk is consistent with and supports the achievement of the strategic and corporate objectives.

2. Provide a high-quality service to customers.

3. Initiate action to prevent or reduce the adverse effects of risk.

4. Minimize the human costs of risks, where reasonably practicable.

5. Meet statutory and legal obligations.

6. Minimize the financial and other negative consequences of losses and claims.

7. Minimize the risks associated with new developments and activities.

8. Be able to inform decisions and make choices on possible outcomes.

Advantages of Risk Management


1. It encourages the firm to think about its threats. In particular, risk management encourages it to analyze risks that might otherwise be overlooked.

2. In clarifying the risks, it encourages the firm to be better prepared. In other words, it helps the firm to manage itself better.

3. It lets the organization prioritize its investment and reduces internal disputes about how money should be spent.

4. It reduces duplication of systems. The integration of environmental and health and safety systems is one instance.

Disadvantages of Risk Management


1. Qualitative risk assessment is subjective and lacks consistency.

2. Unlikely events do occur, but if a risk is unlikely enough, it may be better to simply retain the risk and deal with the result if the loss does in fact occur.

3. Spending too much time assessing and managing unlikely risks can divert resources that could be used more profitably.

Monday 24 June 2019

10 Differences Between Agile and Waterfall Methodology

The traditional waterfall methodology for software development is rapidly losing its popularity as Agile methodology is now being increasingly adopted by companies worldwide for software development.

Waterfall basically is a sequential model where software development is segregated into a sequence of pre-defined phases – including feasibility, planning, design, build, test, production, and support. On the other hand, the Agile development methodology follows an iterative, incremental approach that provides flexibility for changing project requirements as they occur.

Here are the top 10 differences between Agile and Waterfall Methodology:
  1. The software development process is divided into different phases in the Waterfall model while Agile methodology segregates the project development lifecycle into sprints
  2. Waterfall is a structured software development methodology, and often times can be quite rigid, whereas the Agile methodology is known for its flexibility
  3. According to the Waterfall model, software development is to be completed as one single project, which is then divided into different phases, with each phase appearing only once during the SDLC. However, the Agile methodology can be considered as a collection of many different projects, which are nothing but iterations of the different phases, focusing on improving the overall software quality with feedback from users or the QA team
  4. If you want to use the Waterfall model for software development, then you have to be clear on all the development requirements beforehand, as there is no scope for changing the requirements once project development starts. The Agile methodology, on the other hand, is quite flexible, and allows for changes to be made in the project development requirements even after the initial planning has been completed
  5. All the project development phases such as designing, development, testing, etc. are completed once in the Waterfall model while as part of the Agile methodology, they follow an iterative development approach. As a result, planning, development, prototyping and other software development phases can appear more than once during the entire SDLC
  6. One of the major differences between Agile and Waterfall development methodology is their individual approach towards quality and testing. In the Waterfall model, the “Testing” phase comes after the “Build” phase, but, in the Agile methodology, testing is typically performed concurrently with programming or at least in the same iteration as programming
  7. While Waterfall methodology is an internal process and does not require the participation of customers, the Agile software development approach focuses on customer satisfaction and thus, involves the participation of customers throughout the development phase
  8. The Waterfall model can be regarded as a stringently sequential process, however, the Agile methodology is a highly collaborative software development process, thereby leading to better team input and faster problem solving
  9. The Waterfall model is best suited for projects which have clearly defined requirements and in which change is not expected at all, while Agile development supports a process in which the requirements are expected to change and evolve. Thus, if you are planning to develop software that will require frequent overhauls and has to keep up with the technology landscape and customer requirements, Agile is the best approach to follow
  10. The Waterfall model exhibits a project mindset and focuses strictly on the completion of project development, while Agile introduces a product mindset that focuses on ensuring the developed product satisfies its end customers, and changes as the requirements of those customers change

Saturday 22 June 2019

Incorporate Agile into DMADV


Agile was once solely the realm of programming and software development. It was used widely in the U.S. military, especially the Air Force, but it moved into project management around 2000. As most quality initiatives are managed as projects, it made sense that Agile began morphing into quality management around 2010. Based on the ideas of adaptive planning, evolutionary development, early delivery and continuous improvement, Agile encourages rapid and flexible response to change.

It became transactional in that Agile also focuses on the management of people and their interactions rather than the processes or tools of the methodology they are following. Agile seeks to continuously collaborate with the customer in the development of the product, while responding to changes—immediately. External/regional/military terminologies have also been incorporated, making communication within Agile difficult at times.

A Look at Agile


Agile itself is different from other change management philosophies that abhor scope creep or even customer inputs (after the initial collection of parameters). Agile is meant to be iterative and adaptive using sprints, creating a rolling wave of milestones that can be surfed (riding the rolling wave) with a consistent fetch (flow direction) while leaving flexibility for the team to fulfill the requirements. This is the opposite of waterfall, where once you have gone past a certain point it is difficult to move back upstream even if there is a fetch switch (change in flow direction). Part of the risk matrix must look at the potential for shoaling (having the work pile up toward the end of the project, requiring resource crashing) or the advent of a rogue wave (movements from multiple directions combining to create a scenario where the flow is so large that it cannot be handled no matter the amount of resources used), either of which can lead to odzi or anine (failure points or project cancelation) or All Pau (negatively over).

Agile terminology has changed through the years related to discussions about the velocity and inertia of a project as well as how to properly forecast the workflow. Agile language changed more as the methodology was incorporated into quality management and its user base expanded. For example, some of the terms highlighted above are from the surfing community, remnants of the Native American military code talkers with a little Hawaiian mixed in.

An important task for the continuous improvement practitioner is to ensure that when a term is used, everyone in the group understands what is being discussed! As you can see in just this section above, the terminology is likely to take a little getting used to.

As a process of quality management, Agile can be most readily used inside of a DMADV (Define, Measure, Analyze, Design, Verify) project. It can be used to address the needs that arise when the use of software or automation are the overriding factor in the fulfillment of customer desires – most often when moving from manual handling to a specifically-designed mechanization of the system/process.

What Is a Sprint?


From a quality management perspective, a sprint is an iteration of work and is similar to a Kaizen/Kaikaku engagement where there is a specific set of inputs and expected outputs to fulfill a singular purpose. However, in a sprint, there is an increased amount of effort brought to bear (similar in project management to crashing a project) and a hard-and-fast timeline that must be adhered to (timeboxing). A sprint is normally 30 days in length but can be as short as a week or as long as six weeks. But the biggest difference is that while a Kaizen produces an incremental improvement, the outcome of a sprint may be something reutilized or a completely new design.

Incorporating a Daily Scrum


In software development, a Scrum meeting is expected to occur each morning to make sure everyone is on the same page and timeline about 1) the work process and 2) what is being targeted for completion in the short term. The daily Scrum meeting should not take more than 15 minutes. The daily Scrum is more than just a status update; it’s a pulse check that should illuminate any impediments that are slowing the team’s progress.

During the daily Scrum, each member of the development team should briefly answer the following questions:

◈ What did you do yesterday?
◈ What will you do today?
◈ Are there any impediments in the way?

Each participant in this Scrum meeting should listen to the others and remain present through the entirety of the meeting. Often, members of the development team will identify opportunities to work together during the day based on discussions during the daily Scrum.

In quality management, a Scrum may only take place weekly or at the end of a sprint cycle. The Scrum Master sets the frequency of the Scrum meetings and the desired outcomes, using information on the overall project parameters and knowledge of the burndown rate and sprint schedule to drive the overall schedule.

A Scrum meeting is a level-setting meeting used to ensure that all team members are up to speed on the current state of the project and to determine whether the project is ready to progress onward or whether additional work must be completed before forward momentum is resumed.

A Scrum is an engagement where those team members who desire to press forward are faced by those who believe the product is not ready for the next iteration. This is similar to a forcefield analysis but is done on the fly verbally, with the Scrum Master making the final determination. The side with the more powerful argument usually wins. If there are issues that need to be reworked in order to properly function, the Scrum Master may have the folks who were against moving forward accomplish a short parallel play or parallel sprint to bring the concerns to completion, while the rest of the team takes the next sprint. (That next sprint is usually decided through a Scrum and selected from a prioritized backlog.)

Burndown rate is based on the amount of human resources assigned to the project (based on knowledge-skills-activities [KSA] and work-breakdown-structure [WBS] and initially allocated through a backward-pass analysis of the known tasks) that can then be parceled out to fulfill tasks. The tasks (backlog or set of tasks in waiting) are often set up in a kanban system from the WBS (some call it a scrumban) where those tasks that are sequential are pulled into queue for work to be accomplished in order. Those that are to be done in parallel are pulled at the same time to begin the work, although they may be completed at different rates. The kanban is based on interconnected variables and dependencies and this is why the Scrum decision is so important – as is the knowledge/subject matter expertise of the Scrum Master.

Agile Scrum


Agile Scrum is Lean because it builds from the current state where value is mapped to fulfill a future state through story-driven modeling of outcomes, as they become known. The model changes in an Agile fashion with each input to derive the future state. The story is based on what the customer desires their experience with the system to be like, which often changes (volatility in which the backlog is refined). Then the system is designed, developed, fabricated, tested and deployed to provide that experience. This is accomplished by exploratory-through-adoption testing or run-pass or run-fail exercises where each fail becomes a “learning” that drives a bridge or fulfillment to close the gap.

A running earned value computation (normally used in project management) can be used to keep the Scrum Master informed as to how much of the allotted resources have been used while tracking time, cost and quality at the end of each sprint cycle. The actual costs can then be compared with the planned value at that point, making the earned value of the project known. The burndown rate and project schedule backlog (the list of tasks defined from the WBS) show whether a project is on track, whether the Scrum Master will have enough resources to finish the work, and what the cost will be if additional resources are needed. If the project or segments thereof are running behind schedule, the Scrum Master may have to “crash” the project with additional resources in order to get it back on track before the next sprint is set to commence. This might only be identified during Scrum meetings, which is why they are usually held daily, so that problems may be rectified before the current sprint iteration is complete.
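
A minimal sketch of such a running computation, using the standard earned value formulas (earned value as budget times percent complete, with schedule and cost variances and performance indices derived from it); the sprint figures below are hypothetical.

    # Standard earned-value metrics for a sprint checkpoint.
    def earned_value_status(budget_at_completion, pct_complete,
                            planned_value, actual_cost):
        ev = budget_at_completion * pct_complete   # earned value
        sv = ev - planned_value                    # schedule variance
        cv = ev - actual_cost                      # cost variance
        spi = ev / planned_value                   # schedule performance index
        cpi = ev / actual_cost                     # cost performance index
        return ev, sv, cv, spi, cpi

    # After sprint 2: 40% complete on a 100k budget, 50k planned, 45k spent
    ev, sv, cv, spi, cpi = earned_value_status(100_000, 0.40, 50_000, 45_000)
    print(f"EV={ev:,.0f} SV={sv:,.0f} CV={cv:,.0f} SPI={spi:.2f} CPI={cpi:.2f}")

An SPI below 1 signals the schedule slippage that would prompt the Scrum Master to consider crashing the project.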

At the end of the project, a Scrum of Scrums will occur to determine if all of the project parameters have been met and are functional. This is similar to a Control/Verify phase tollgate in Six Sigma, but may require demonstration of the system to the stakeholders and customers to ensure verification and validation of the system, or to determine whether corrective action is needed.

The benefits of using Agile within your quality management functional work are easy to see. If you have not yet used Agile in a DMADV, give it a shot! It can drive positive results.

Friday 21 June 2019

Project Management in the Women's World Cup

PRINCE2 is a proven and effective project management tool, used in all kinds of projects. That includes sporting events much like the highly anticipated Women’s World Cup. The 2019 tournament started in France on 7 June and is scheduled to conclude at the start of July. Held every four years since 1991, FIFA’s Women’s World Cup, like most international sporting tournaments, is a feat of project management. What lessons can be learnt from project managing this tournament played out on the global stage?

#1 Start planning early


The bigger the project, the earlier the planning phase needs to start. The 2023 Women’s World Cup host won’t be announced until March 2020, but that still gives the host nation – chosen from several bids, including Australia, New Zealand, North and South Korea and Bolivia – three years to plan, organise and implement the tournament.

Essentially, everything that could go wrong usually does when managing a massive project like this.

For example, games are played across a variety of stadiums, so there’s more travel involved for competing teams. Spreading the games across several locations and cities makes for all kind of logistical headaches.

Throw in elements that no project manager can control – mainly the weather, but also medical issues, illnesses and so on – and you can see a myriad of project management issues opening up.

Delays, or the threat of delays, can bring a project like the Women’s World Cup to an embarrassing standstill. With the right project management leadership skills however, a thorough risk assessment analysis will prevent many of these delays and pressures.

#2 Don’t overcomplicate the project


The phrase ‘don’t re-invent the wheel’ is one we hear often. With huge sporting tournaments such as this, it’s tempting for the host nation to try to make their mark. While a competitive spirit on the field is always needed, off the pitch, it can become an obstacle, especially within the project management realm.

For example, when Brazil hosted the 2014 World Cup, they decided to position themselves as ‘the’ footballing nation, embarking on an ambitious stadium building project as well as extending airports. However, the construction phase was dogged by problems and issues. In challenging themselves to build seven new stadiums, they were overstretched.

On the other hand, when Canada hosted the 2015 Women’s World Cup, they didn’t embark on any ambitious new building projects. They utilised what they already had, choosing instead to face down the transport and logistics challenges of moving not just teams, but record crowds too.

The lesson here is simple: in the planning project management phase, use project management software to determine the capabilities of a country or organisation to resource the project. If you already have the resources, why create more?

#3 Don’t forget your stakeholders


In any project, there are many stakeholders. In the FIFA Women’s World Cup, there is not just FIFA itself, but also, just to name a few:

◈ The host nation
◈ The football teams and their players
◈ Corporate sponsors such as Coca-Cola, Qatar Airways and Adidas, amongst others.

Not working in collaboration or including stakeholders in project management decisions can lead to all kinds of issues. Some of these issues can embarrass the corporate body, the hosting nation and sponsors. For example, early in the 2015 Women’s World Cup in Canada, games were played on artificial grass surfaces, which can contribute to player injury. The players mounted a lawsuit against FIFA and although it was ultimately resolved, the embarrassment it caused was not welcomed by anyone. It was just one controversy that dogged project management aspects of the 2015 tournament.

With the right leadership skills, project managers will understand the need to treat stakeholders with care to avoid fiascos and embarrassing incidents. Essentially, this is about communication between all parties involved. By placing communication at the heart of project management, the final product is closer to what everyone had in mind, from large corporate sponsors to the players themselves.

#4 Be realistic


Saying ‘no’ is hard to do, even for seasoned, qualified project managers. Brazil made big promises as part of their bid to host the 2014 tournament, and the promises were over-ambitious. It meant that resources were spread too thin, so things which should have been prioritised were not given the necessary resources and people. As a result, construction workers were rushing to finish airports while smaller teams were struggling to complete stadiums.

The result was more embarrassment for FIFA and the host country too. Brazil knew it would be unable to complete the airports but still made this a key deliverable. As part of project management, it is essential that project managers are realistic about what is deliverable and what isn’t.

Achieving this means following the data and information provided by project management software and sharing these findings with stakeholders. With communication central to project management, as discussed in the previous point, project managers can be firm about what can and can’t be accomplished.

The highs and lows of project managing the Women’s World Cup


The Women’s World Cup 2019 is being held across nine venues, and the first few games were sold out within 48 hours. As the tournament progresses and the competition heats up, the semi-finals and final are also close to selling out. Within this large project, qualified project managers will lead multiple projects, all of which bring their own unique set of challenges and rewards.

As yet, the current hosts of the 2019 tournament seem to have learnt from past project management mistakes. The number of venues being used is lower than at Brazil’s 2014 tournament, but there has been criticism of the lack of visible promotion for the tournament, possibly explaining why some matches are not sold out.

On the whole, however, it seems those project managing this year’s Women’s World Cup have got the basics right.

Thursday 20 June 2019

Building a Sound Data Collection Plan

Black Belts and Six Sigma practitioners who are leading DMAIC (Define, Measure, Analyze, Improve, Control) projects should develop a sound data collection plan in order to gather data in the measurement phase. There are several crucial steps that need to be addressed to ensure that the data collection process and measurement systems are stable and reliable. Incorporating these steps into a data collection plan will improve the likelihood that the data and measurements can be used to support the ensuing analysis. What follows is a description of these steps. A checklist, populated with dummy responses, is also provided to illustrate the importance of building a well-defined data collection plan prior to execution.

Three phases – five steps total – are involved in building a sound data collection plan:

Pre-Data Collection Steps


1. Clearly define the goals and objectives of the data collection
2. Reach understanding and agreement on operational definitions and methodology for the data collection plan
3. Ensure data collection (and measurement) repeatability, reproducibility, accuracy and stability

During Collection Steps


4. Follow through with the data collection process

Post-Data Collection Steps


5. Follow through with the results

Step 1: Define Goals And Objectives


A good data collection plan should include:

◈ A brief description of the project
◈ The specific data that is needed
◈ The rationale for collecting the data
◈ What insight the data might provide (to a process being studied) and how it will help the improvement team
◈ What will be done with the data once it has been collected

Being clear on these elements will facilitate the accurate and efficient collection of data.

Step 2: Define Operational Definitions and Methodology


The improvement team should clearly define what data is to be collected and how. It should decide what is to be evaluated and determine how a numerical value will be assigned, so as to facilitate measurement. The team should consider consulting with the customer to see if they are already collecting the same (or similar) data. If so, comparisons can be made and best practices shared. The team should also formulate the scope of the data collection:

◈ How many observations are needed
◈ What time interval should be part of the study
◈ Whether past, present, and future data will be collected
◈ The methodologies that will be employed to record all the data

It is best to obtain complete understanding of and agreement on all the applicable definitions, procedures and guidelines that will be used in the collection of data. Overlooking this step can yield misleading results if members of the improvement team are interpreting loosely defined terms differently when collecting data. Serious problems can arise for the organization when business decisions are made based on this potentially unreliable data.

If the team wishes to examine historical data to include as part of the study, careful attention should be paid to how reliable the data and its source have been, and whether it is advisable to continue using such data. Data that proves to be suspect should be discarded.

Step 3: Ensuring Repeatability, Reproducibility, Accuracy and Stability


The data being collected (and measured) will be repeatable if the same operator is able to reach essentially the same outcome multiple times on one particular item with the same equipment. The data will be reproducible if all the operators who are measuring the same items with the same equipment are reaching essentially the same outcomes. In addition, the degree to which the measurement system is accurate will generally be the difference between an observed average measurement and the associated known standard value. The degree to which the measurement system is stable is generally expressed by the variation resulting from the same operator measuring the same item, with the same equipment, over an extended period.

Improvement teams need to be cognizant of all the possible factors that could cause reductions in repeatability, reproducibility, accuracy and stability – over any length of time – and in turn render the data unreliable. It is good practice to test, perhaps on a small scale, how the data collection and measurements will proceed. Such a simulation should make apparent what the possible factors are, and what could be done to mitigate their effects or to eliminate them altogether.
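
One way to run such a small-scale test is to have each operator measure the same item several times and compare the within-operator and between-operator spread. The sketch below is a simplified stand-in for a formal gage R&R study; the operators and measurements are hypothetical.

    # Quick repeatability/reproducibility check on hypothetical trial data.
    import statistics

    trials = {   # operator -> repeated measurements of the same item
        "operator_A": [10.1, 10.0, 10.2, 10.1],
        "operator_B": [10.4, 10.5, 10.3, 10.4],
        "operator_C": [10.1, 10.2, 10.1, 10.0],
    }

    # Repeatability: spread of each operator's own repeated measurements
    for op, values in trials.items():
        print(f"{op}: mean={statistics.mean(values):.2f} "
              f"sd={statistics.stdev(values):.3f}")

    # Reproducibility: spread between the operators' means
    operator_means = [statistics.mean(v) for v in trials.values()]
    print(f"between-operator sd: {statistics.stdev(operator_means):.3f}")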

Step 4: The Data Collection Process


Once the data collection process has been planned and defined, it is best to follow through with the process from start to finish, ensuring that the plan is being executed consistently and accurately. Assuming the Black Belt or project lead has communicated to all the data collectors and participants what is to be collected and the rationale behind it, he or she might need to do additional preparation by reviewing with the team all the applicable definitions, procedures, and guidelines, etc., and checking for universal agreement. This could be followed up with some form of training or demonstration that will further enhance a common understanding of the data collection process as defined in the plan.

It is a good idea for the Black Belt or project lead to be present at the commencement of data collection to provide some oversight. This way the participants will know right away whether or not the plan is being followed properly. Failure to oversee the process in its incipient stages might mean that a later course correction will be needed, and much of the data collection and/or measurement effort will be wasted. Depending on the length of time it takes to collect data – and whether the data collection is ongoing – providing periodic oversight will help to ensure that no shortcuts are taken and that any new participants are properly oriented with the process to preserve consistency.

Step 5: After The Data Collection Process


Referring back to the question of whether the data collection and measurement systems are reproducible, repeatable, accurate and stable, the Black Belt or project lead should check that the results (data and measurements) are reasonable and that they meet the criteria. If the results do not meet the criteria, the Black Belt or project lead should determine where any breakdowns exist and what to do with any data and/or measurements that are suspect. Reviewing the operational definitions and methodology with the participants should help to clear up any misunderstandings or misinterpretations that may have caused the breakdowns.

Step 6: Sample Populated Data Collection Plan


The entries below are example data for illustration purposes only. To create your own data collection plan, follow the outline provided and replace the example entries with your project-specific plan.

Goals And Objectives

Description of the project:

The results of the recent election in our municipality have caused concern over the validity of our vote counting process. Our current law states that a manual recount is required when the vote count differential is less than 0.5 percent. However, neither the manual vote counting process nor the vote counting device has been analyzed to determine their reliability. Such information will be beneficial to the legislature when it convenes to discuss the state of our voting process. Therefore, the improvement team has decided to collect some data relating to the vote counting process.

The team will start the measurement phase with an experiment to determine whether the punch-hole type ballots have any tendency to become altered or materially misshapen as a result of being processed through the vote counting device – such that the outcome (or vote) would change if the same ballot were subjected to a manual recount. This one-factor-at-a-time experiment will explore the possibility that manual recounts, even if proven to be reliable, could give erroneous information if the ballots they receive (as inputs into the manual recount process) from the vote counting device have been altered in some way. Subsequent experiments will examine whether the practice of stacking and binding the punch-hole type ballots after they have been processed through the device contributes to any alteration of outcomes.

Data to be collected:

Post-feed vote count accuracy.

Name of measure (label or identifier):

Vote count totals from pre-marked ballots after being processed by the vote counting device.

Description of measurement (accuracy, cycle time, etc.):

Accuracy – comparison of ballot and vote totals pre- and post-feed, giving us a yield.

Purpose of data collection:

Ultimately, the goal is to determine whether the reliability of the manual vote counting process and ballot counting devices in our municipality is consistent with our laws requiring a recount at the 0.5 percent threshold.

What insight the data will provide:

The data, when counted and compared with the pre-marked ballot totals prior to processing, should tell us if the ballots are distorted in any way when they are fed through the vote counting device such that the outcome (or vote) is altered.

Type of measure (input, process or output):

Process measure.

Type of data (discrete-attribute, discrete-count or continuous):

Discrete-Count.

How it will help the improvement team:

The team will be able to make a decision on whether to eliminate from consideration the possible effects of the ballots being processed through the vote counting device as a possible factor in the overall reliability of the vote counting system.

What will be done with the data after collection:

The team will use the data to arrive at a process accuracy measure, which may be included in the final rolled throughput yield calculation. The team may also use the data to populate a concentration diagram if vote count inaccuracies cluster in one particular area of the ballot, which might indicate an obstruction or force in the device causing inaccurate vote counts.
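
Since the plan mentions a rolled throughput yield calculation, here is a minimal sketch of how that roll-up works: RTY is simply the product of the first-pass yields of each process step. The step names and yield values below are hypothetical placeholders, not results from this study.

```python
from functools import reduce
from operator import mul

# Hypothetical first-pass yields for each step of the vote counting
# process (placeholder values, not actual study results).
step_yields = {
    "ballot punching": 0.995,
    "device feed": 0.990,     # the accuracy measure collected in this plan
    "manual recount": 0.985,
}

# Rolled throughput yield: the probability that a ballot passes every
# step defect-free, i.e. the product of the individual step yields.
rty = reduce(mul, step_yields.values(), 1.0)
print(f"RTY = {rty:.4f}")  # -> RTY = 0.9703
```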

Operational Definitions And Methodology

Who? (roles, responsibilities):

Project lead and process owner will supervise/oversee; each team member will participate in the data collection.

What? (define the measure):

Post-feed vote count accuracy: Inaccurate = Post-feed ballot does not match exactly the outcome (votes) of the same pre-marked ballot at pre-feed.

Where? (source, location):

Data collection will take place at the precinct 9 headquarters. Data analysis will be conducted at the State Capitol offices.

Scope:

Sampling plan (number of observations):

1,000 total observations are desired, with 250 collected at each of the four weekly intervals.

When (times, intervals, frequencies):

Data collection will take place every Thursday from 9 a.m. to 10 a.m., beginning October 9 and ceasing October 30.

Past data:

None available.

Present data:

Data collection to begin October 9.

Future data:

To be determined.

How (methodology):

Post-feed vote count accuracy: A pre-marked ballot containing five names written in magic marker (located in the upper right corner of the ballot) will serve as the actual voter intention and will indicate to the participant whom to vote for (i.e., which holes to punch). The participant will take the pre-marked ballot to voting booth A and punch the appropriate holes. The hole-punching will be observed by the team lead or the process owner. When all the appropriate holes are punched, the team lead or process owner will record the results as they interpret the punches. The participant will then take the ballot and deposit it into the vote counting device. Once the ballot has been fed into the device and the vote has been registered, it will be collected again by the participant and compared to the original, pre-feed vote at booth B. The team lead or process owner will record the results once again as they interpret the punches in their post-feed form. The process will repeat until the desired number of observations has been met.

How (recording data):

Use the tally sheets provided by the team lead. An inaccurate vote count will be recorded (tallied) as the numeral zero on the tally sheet, and an accurate vote count as the numeral one.
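
With that 0/1 recording convention, summarizing an interval's data is straightforward. The sketch below shows the computation; the tally counts are hypothetical placeholders, not actual study results.

```python
# Hypothetical tallies from one Thursday interval: 1 = accurate,
# 0 = inaccurate, per the recording convention above.
tallies = [1] * 243 + [0] * 7  # 250 observations, 7 inaccurate

accurate = sum(tallies)
n = len(tallies)
accuracy_yield = accurate / n
print(f"Post-feed vote count accuracy: {accurate}/{n} = {accuracy_yield:.1%}")
# -> Post-feed vote count accuracy: 243/250 = 97.2%
```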

Data Collection (and Measurement) R&R, Accuracy and Stability

Plan for data collection (and measurement) repeatability:

Not applicable.

Plan for data collection (and measurement) reproducibility:

Not applicable.

Plan for measurement systems accuracy:

Not applicable.

Plan for measurement systems stability:

Not applicable.

Wednesday 19 June 2019

Agile and ITIL: Friends or Foes?

Today, many IT organizations are expanding their IT businesses using ITIL (Information Technology Infrastructure Library) and other valuable industry frameworks for ITSM (IT Service Management), focusing on improving their service quality. In addition to quality, companies are trying to build agility with the emergence of new technologies and methodologies such as Agile software development.

Recent reports from ITSM.tools have highlighted the factors that organizations measure in their IT work, and published statistics on how often each aspect is measured.


Yet even with these methodologies and technologies in place to speed up delivery, IT operations could not keep pace with the delivery rate demanded of IT services. This prompted much discussion in the industry about combining ITIL and Agile: Is it possible for both to coexist within an organization? Can ITIL and Agile play a major role in merging service quality with agility and speed? Are Agile and ITIL friends or foes? This article tries to address these questions as precisely as possible.

ITIL provides a framework for the governance of IT from the business and customer perspective. It is referred to as the best-practice framework for IT service management (ITSM), focusing on continuous measurement and improvement of the quality of the IT services delivered to customers. According to the ITIL Practitioner course, ITIL includes nine guiding principles:

◈ Focus on value
◈ Design for experience
◈ Start where you are
◈ Work holistically
◈ Progress iteratively
◈ Observe directly
◈ Be transparent
◈ Collaborate
◈ Keep it simple

Agile is a set of software development processes through which cross-functional teams deliver solutions that fulfill customer requirements. To achieve Agile ITSM, companies need to adopt the key values of the Agile Manifesto:

◈ Individuals and Interactions over processes and tools
◈ Working Software over comprehensive documentation
◈ Customer Collaboration over contract negotiation
◈ Responding to Change over following a plan.

If these Agile values are matched against the nine principles of ITIL, some striking similarities emerge. "Working software" is the equivalent of "Focus on value": develop the right things, so that customers can use the software they value. The "Keep it simple" principle shows clearly how close ITIL and Agile are; it suggests acting quickly and delivering quality, much like "Responding to change."

One of the main hurdles in integrating Agile and ITIL is the fact that ITIL follows a sequential framework, whereas Agile is an iterative approach in which minimum viable products (MVPs) are constructed and updated in very short cycles. This can create instability, yet businesses and their clients look for IT services that are both stable and agile.

DevOps can be the solution. It offers a durable way to bring these two contrasting aims – stability and agility – together. DevOps is based on combining and improving communication between Development (Dev) and IT Operations (Ops). It provides technical practices for producing software, with the goal of automating application delivery and the workflow of the underlying processes (planning, design, implementation and testing).

In the future, IT service management will be lean, fast and agile. According to Gene Kim, thought leader and co-author of The Phoenix Project: “Patterns and processes that emerge from DevOps are the inevitable outcome of applying Lean principles to the IT value stream […and] ITSM practitioners are uniquely equipped to help in DevOps initiatives, and create value for the business.”

Essentially, considering these diverse perspectives, Agile and ITIL can coexist without major conflict. They can very much go hand in hand, because the combination allows IT organizations to build a new culture called Agile ITSM. ITIL offers a framework for stable, quality-assured service delivery, while DevOps provides a continuous stream of improvements. Through this alliance of Agile/DevOps and ITIL principles, Agile ITSM can provide guidelines for delivering services rapidly, the Agile way.

Tuesday 18 June 2019

Brewing a Better Beer with TQM

Recently, a well-established, rapidly expanding beer company invested heavily in a modern, state-of-the-art brewing facility. The new facility dramatically improved quality and productivity, and also reduced costs through the application of new technology. As a next step, the beer company began exploring methods of achieving a further quantum jump in performance. Recognizing that technology and added investment might offer diminishing returns, the company decided to explore total quality management (TQM) principles as a means of achieving this ambition in the manufacturing arena.


The resulting project, which involved several Lean Six Sigma analysis tools, was created to demonstrate what sort of benefits TQM and Lean Six Sigma could generate to help the company decide whether to expand such continuous improvement methods across the organization.

Beer Brewing Primer


Before going further into the TQM process, here is a quick description of the basic process used to manufacture beer. The key steps in beer brewing are as follows:

1. Soak malted barley and other ingredients in hot water to make wort, a liquid extract that contains the sugars that will be fermented into alcohol.
2. Boil the wort, add hops as a bittering agent and filter the resulting mixture.
3. Once the wort is cooled, add yeast to ferment, mature and lager the beer in fermentation tanks.
4. Bottle the beer.

Selecting the Theme


To begin, the senior manufacturing management attended a two-day quality mindset program to get an introduction to TQM, understand why and how it works, and, most importantly, to open their minds about exploring change. The group was then asked to brainstorm their priorities and select a theme for the project, which, if successful, could help demonstrate the potential benefits of TQM.

The team selected “Quality Improvement” (QI) as the theme and chose one of their large, modern locations (referred to here as “Factory A”) as the venue for the project. A cross-functional team from Factory A’s management was selected to take part in the two-day quality mindset program.

The project was structured around TQM’s seven-step problem solving method, defined as:

1. Define the problem
2. Conduct root cause analysis
3. Generate countermeasure Ideas
4. Test the ideas and implement in production
5. Check the result
6. Standardize procedures
7. Prepare a QI story

In this case, however, many of the above steps were merged or conducted in a non-linear fashion at various points in the process.

Step 1: Define the Problem


The Factory A team brainstormed and prioritized the key areas of quality that needed dramatic improvement. Eventually, “improving consistency of taste” emerged as the target area. After using a ranking exercise to review the various possible attributes of taste, the team chose “bitterness of beer” as the parameter that would be tackled in this project.

In the prescribed units of measurement for bitterness, the standard was 10 ± 2 units (Note: for confidentiality reasons, 10 is not the actual value but an arbitrarily assumed base). On examining past quality control (QC) measurements, 75 measurements over one month revealed the following:

Average bitterness (B) = 10.2
Sigma (s) = 0.29

After seeing these measurements, it was puzzling why the team perceived a need for improvement in consistency of bitterness. The team decided to reevaluate the bitterness measurement system and the sampling scheme.
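
The puzzlement is easy to see with a quick capability check. The sketch below applies the standard Cp/Cpk formulas to the figures quoted above (specification 10 ± 2, observed average 10.2, sigma 0.29); values of 2 or more would normally indicate a highly capable process.

```python
# Quick process capability check with the figures reported above.
usl, lsl = 12.0, 8.0   # specification limits: 10 +/- 2
avg, sigma = 10.2, 0.29

cp = (usl - lsl) / (6 * sigma)                 # potential capability
cpk = min(usl - avg, avg - lsl) / (3 * sigma)  # capability allowing for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # -> Cp = 2.30, Cpk = 2.07
```

On paper the process looked more than capable, which is exactly why the team went on to question the measurement system and the sampling scheme rather than accept the numbers at face value.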

Measurement: The measurement of bitterness involves four stages: degassing (removal of all carbon dioxide gas from the sample), shaking, centrifuging and measuring. Observation of the measurement process yielded obvious inconsistencies:

1. The degassing was done manually by tipping the beer from one tumbler to another.
2. It was not clear when the beer was degassed enough.
3. The centrifuging varied from operator to operator.
4. The specified times in the standard operating procedure (SOP) were not being followed.

Countermeasures: 1) A magnetic stirrer was introduced and the time of degassing was standardized, and 2) the necessity of obeying the SOP timings was emphasized.

The repeatability and reproducibility of the system were checked using gauge R&R analysis, and the variation of 0.07 was deemed satisfactory.

Sampling: Past performance was based upon a sample size of one bottle per shift for a production of 25,000 bottles per hour. The team suggested that workers check the bitterness much more intensively to confirm performance. Hourly sampling was done for two days. The results are shown in Table 1.

Table 1: Sampling Results

                   Number of samples   Average bitterness   Avg. + 3s   Avg. − 3s
QC sampling        75                  10.2                 11.1        9.33
Hourly sampling    43                  10.2                 12.3        8.15
Control limits                                              12          8

The hourly data showed that the process did not deliver even 3-sigma quality (i.e. 99.7 percent of products within the control limits) over 40 hours. Therefore, it was very unlikely that the process could deliver 6-sigma quality over one month.

An X-bar control chart was developed, as shown in Figure 1.


Figure 1: X-Bar Control Chart – Hourly Bitterness, Mild Beer Bottle

From these measurements, two important mindset changes were achieved:

1. The process was not perfect, as previously thought; there was room for improvement.
2. The measurements could now be trusted to mirror reality.

Regular plotting of the chart was commenced, and 100 hours was selected to represent the population variation. The average and variation of the first 100 hours of readings measured the current state and helped define the problem, using the TQM formula: problem = desire – current state.
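
For teams that want to reproduce this kind of chart, here is a minimal sketch of the control-limit arithmetic. The readings are hypothetical placeholders; and because one bottle is measured per hour, the sketch uses an individuals chart with the moving-range estimate of sigma (average moving range divided by the d2 constant, 1.128), a common companion to the X-bar chart used for subgrouped data.

```python
import statistics as stats

# Hypothetical hourly bitterness readings (placeholders, not project data).
readings = [10.1, 10.2, 9.9, 10.0, 10.3, 12.4, 10.1, 9.8, 10.2, 10.0]

# Individuals chart: estimate sigma from the average moving range
# (divided by d2 = 1.128 for subgroups of size 2), then set the
# centre line and the 3-sigma control limits.
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
sigma_hat = stats.mean(moving_ranges) / 1.128
centre = stats.mean(readings)
ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat

print(f"centre = {centre:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for hour, x in enumerate(readings, start=1):
    if not (lcl <= x <= ucl):
        print(f"hour {hour}: reading {x} is outside the control limits")
```

Plotting each new reading against these limits is what lets a line team question every out-of-the-ordinary peak or trough as it appears, as the team later did.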

Step 2: Finding Root Causes


The average measurement (10.8) of this batch of beer ranked at the top of the bitterness scale for “mild beer.” Because the high average fell within the overlapping range for the “strong beer” category, the team saw that the beer was going out of range.

Generate and test countermeasure ideas: Reducing the bitterness average required cutting the hops added per batch by 100 grams. Once this was implemented, the average came down progressively to 10.3 in about 150 hours (see Table 2).

Table 2: Progressive Reduction in Bitterness Over Time

Hours                50-150   75-175   100-200   200-300
Average bitterness   10.8     10.6     10.4      10.3

As the project progressed gradually, the hops dose was fine-tuned; reducing the hops addition by another 100 grams achieved an average of 10. The project’s first objective had been largely achieved, along with minor cost savings. The stage was now set for the much more difficult task of reducing the variation by 50 percent.

Brainstorming generated a list of possible causes of variation, which were organized using an Ishikawa, or fishbone, diagram. Most key variables and recipe components were controlled automatically and remained within tight limits. Only two processes were manual, and both varied:

1. Preparation of a hops solution and the time of its addition to the wort.
2. Weighing of hops – a balance that needs calibration, cleaning and careful usage.

SOPs were developed for the above two factors and implemented.

The bitterness of the beer develops through the process in three key stages:

1. Wort making
2. Fermentation, maturation and lagering – a three-stage process carried out continuously in the fermenters
3. The transfer between fermentation and bottling, during which a change in bitterness occurred

Regular measurement of the bitterness of each wort batch was introduced, and the team developed a control chart, reviewing the 3-sigma limits after every 50 batches. The experience-based standard for the wort's average bitterness was 20 (Note: Again, this level differs from the actual value for data confidentiality reasons).

For the first 50 wort batches, the average bitterness was measured at 21, with a sigma of 1.31. The average was being adjusted in line with the bitterness of beer.

With the basic process stabilizing, control chart plotting was transferred from quality assurance to the shift brewer. A small line team began meeting daily to question every out-of-the-ordinary peak or trough, and eliminated the sources of variation. Gradually, over two months (200 hours), the sigma was reduced by another 50 percent – from 0.68 to between 0.35 and 0.38.

Step 3: Check the Results


When recording started for this project, the state of wort bitterness was as follows:

Average: 10.7
Sigma: 0.11
3-sigma limits: 10.37 to 11.03

The improvement in wort bitterness occurred in three stages:

Phase 1 

The average wort bitterness improved dramatically, from 10.7 to 10.8. When team members queried why, two causes emerged:

◈ One of the ingredients of the recipe was a thick liquid received in drums. The team realized that about 5 kilograms remained in each drum and could be recovered by washing with hot water. This process was implemented to enhance yield.
◈ Batches of 10.6 (the lower end of the bitterness scale) ceased to appear and were replaced by some 10.8s, hitherto not present, as the process gradually standardized for bitterness variation control.

Phase 2

The TQM team then wondered: if one batch could be 10.8, why should the next one be 10.7? A relentless search for, and elimination of, minor causes of variability gradually led to an increasing number of 10.9 readings. By the end of Phase 2, the average wort bitterness measurement had moved to 10.85.

Phase 3 

Thereafter, regular reviews and gradual tightening of process parameters raised the wort bitterness average further to 10.9. A few batches touched 11, but most were at 10.9 and a few were 10.8. The improvement achieved is summarized in Table 3.

Table 3: Final Project Results

                          Initial state   After change
Average wort bitterness   10.7            10.9
Sigma                     0.11            0.04

These results translated into a major cost savings of $150,000 annually.
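
As a quick sanity check on Table 3, the sketch below recomputes the 3-sigma bands and the variation reduction from the reported averages and sigmas; note that the “before” band matches the 10.37-to-11.03 limits quoted at the start of this step.

```python
before = {"avg": 10.7, "sigma": 0.11}
after = {"avg": 10.9, "sigma": 0.04}

# 3-sigma bands implied by the reported averages and sigmas.
for label, s in (("before", before), ("after", after)):
    lo, hi = s["avg"] - 3 * s["sigma"], s["avg"] + 3 * s["sigma"]
    print(f"{label}: 3-sigma band {lo:.2f} to {hi:.2f}")
# before: 3-sigma band 10.37 to 11.03
# after:  3-sigma band 10.78 to 11.02

reduction = 1 - after["sigma"] / before["sigma"]
print(f"sigma reduced by {reduction:.0%}")  # -> sigma reduced by 64%
```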

In the future, the team may consider measuring the variation ratio of pre- and post-fermentation bitterness to try and make the fermentation process even more consistent.