Monday, June 3, 2013

Why is stability of the data so important?

Recently on a popular Six Sigma site the following question appeared:

 "I have a question if I have variable data ( that is not normally distributed) I then transferred it to Attribute data and worked out the DPMO from the opps/defects. If the DPMO is normally distributed can I carry on using stats such at t – tests etc. Or because it is originally attribute data I should use chi squared etc? Any advise appreciated."

From this question, you could run a three-day workshop. My short attempt at an answer included:

"As others have said, stay with the continuous data. Before doing anything else put the data on an appropriate control chart and learn from the special causes. As Shewhart noted: things in nature are stable, man made processes are inherently unstable. I have taken this from Shewhart’s postulates. T test and other tests all rest on the assumption of IID; Independent and Identically Distributed. If there are special causes present these assumptions are violated and the tests are useless. Even though the “control chart” show up in DMAIC under C for many novices, it should be used early. Getting the process that produced the data stable is an achievement. It is also where the learning should start. Calculating DPMO, and other outcome measures can come later; after learning and some work. Best, Cliff"

Why the fixation on outcomes: calculating capability, DPMO, and the like? Without any knowledge about the stability of the data, such calculations are very misleading. In 1989, I sat in a workshop where Dr. W. Edwards Deming made the following comment, "It will take another 60 years before Shewhart's ideas are appreciated." At the time, I thought he was nuts. Control charts were everywhere. Then they disappeared. Now I see Deming as a prophet.

In the history of improvement science, we are going through a period not unlike the dark ages. We have people grasping for an easy path and for quick answers generated by a computer that might as well be "unmanned." Getting the process stable is an achievement! Our first move with statistical software should not be a normality check, but a check of the data to see whether the process that produced it is stable and predictable. If we have such a state, then our quality, costs and productivity are predictable. Without this evidence, we are flying blind.
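
For readers who want to try that first check on their own data, here is a minimal sketch using an individuals (XmR) chart; the measurements in it are made up for illustration and are not from any process discussed here.

```python
# A minimal sketch of the "first move": check stability on an individuals (XmR)
# chart before calculating capability, DPMO, or anything else.
# The data below are hypothetical illustration values.
import statistics

def xmr_limits(values):
    """Return (lower limit, center line, upper limit) for an individuals chart."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)      # average moving range
    center = statistics.mean(values)             # process average
    # 2.66 is the standard individuals-chart constant (3 / d2 for n = 2)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 11.7, 12.3,
        12.0, 12.1, 20.3, 12.2, 11.9, 12.4, 12.0]
lower, center, upper = xmr_limits(data)
special_causes = [(i, x) for i, x in enumerate(data) if x < lower or x > upper]
print(f"natural process limits: {lower:.2f} to {upper:.2f}, center {center:.2f}")
print("points signaling a special cause:", special_causes)
```

Only after the chart shows a reasonable degree of stability, and the special causes have been studied, does a capability or DPMO number begin to mean something.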

Thursday, March 14, 2013

We are doing a 4.5 Sigma program?

Dr. Bill Latzko has published a very short and informative paper on the ideas underlying the Six Sigma program. Advocates of Six Sigma often talk about achieving Six Sigma quality, meaning 3.4 defects per million opportunities. Latzko examines this claim and the assumptions behind it in this fine paper:

http://www.latzko-associates.com/Publications/SIX_Sig.pdf
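
To see where the famous 3.4 figure comes from, here is a small sketch of the arithmetic Latzko examines: the Six Sigma literature conventionally assumes the process mean drifts 1.5 sigma toward a specification limit, so a "Six Sigma" process is actually judged at 6 - 1.5 = 4.5 sigma, and the normal tail beyond 4.5 sigma is about 3.4 per million.

```python
# A sketch of the arithmetic behind "3.4 parts per million".
# The 1.5-sigma shift is the conventional Six Sigma assumption, not a law of nature.
import math

shifted_distance = 6.0 - 1.5                              # effective 4.5 sigma
tail = 0.5 * math.erfc(shifted_distance / math.sqrt(2))   # one-sided normal tail area
print(f"defects per million: {tail * 1e6:.1f}")           # prints ~3.4
```

In other words, the 3.4 parts per million headline describes a 4.5 sigma tail, which is the point of this post's title.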

Dr. Taguchi's idea of reducing variation around a target should be studied and understood by those who are interested in improvement. Focusing on meeting specifications has been a step backward that Deming warned us about in his last book: "Conformance to specifications, zero defects, Six Sigma Quality, and all other (specification-based) nostrums all miss the point."
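
If the loss function idea is new to you, here is a small illustration of the contrast; the target, specification limits, and cost constant are made-up numbers, so treat it as a sketch of the idea rather than a costing model.

```python
# A sketch contrasting Taguchi's quadratic loss with the pass/fail view of specifications.
# Target, limits, and the constant k are hypothetical illustration values.

def taguchi_loss(y, target=10.0, k=4.0):
    """Loss grows with the square of the distance from target, even inside the specs."""
    return k * (y - target) ** 2

def spec_loss(y, lower=9.0, upper=11.0, scrap_cost=4.0):
    """The specification view: any in-spec unit is 'free', any out-of-spec unit costs the same."""
    return 0.0 if lower <= y <= upper else scrap_cost

for y in (10.0, 10.5, 10.9, 11.1):
    print(f"y = {y}: Taguchi loss {taguchi_loss(y):.2f}, spec-based loss {spec_loss(y):.2f}")
```

The point of the contrast: a unit at 10.9 "passes" and looks free under the specification view, while the quadratic loss says it is already costing us almost as much as a rejected unit.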

We can do better.

Monday, March 5, 2012

Using Planned Experiments to Accelerate Learning and Improvement

How can we speed up our learning and make improvements that will help us reduce costs? Typically, people making improvements try to change one thing at a time. The problem with this approach is the complexity of the processes and the interdependent factors we are studying. To be more effective, we need to learn how to test more than one factor at a time. Let’s consider an example. 

An improvement team in a hospital system supported by citizens' taxes is attempting to help patients not miss appointments. When someone does not show up for an appointment, it is a loss to society. The improvement team tried texting the patients one day in advance. This did not seem to make much impact. It was then decided to have a person call the patient personally one day in advance. An improvement advisor happened to overhear this discussion and suggested that the team test two factors at two levels, sometimes referred to as a 2² factorial design, which requires 4 runs or tests. Rather than abandon the text idea, the team would test it against the call idea and add a lead-time factor: contact either 1 day or 3 days in advance. The improvement advisor suggested that contacting patients three days in advance might allow them to plan better. Figure 1 shows the simple designed experiment matrix that was developed, along with the percent of no-shows as the response variable.

Figure 1: Design Matrix for a Two-Factor, Two-Level Experiment
 
From Figure 1, you can readily see that the 4th test, which combined the personal call with the 3-day lead time, reduced the no-show rate to 10%. The next best combination was a text message 3 days in advance of the appointment, with a 12% no-show rate. The team decided to avoid the cost of taking up a person's time to call patients and to use the 3-day advance text. One team member suggested setting the computer to also give a 7-day warning; the team agreed to follow this idea up with another test. The response plots in Figure 2 describe the results from the 4 test runs.

Figure 2: Response Plots for the Factorial Design

Before the improvement advisor suggested the designed experiment, the team was ready to run another one-factor test: the more expensive idea of adding more people to make personal calls. This would have reduced the no-show rate from 25% to 15%, a large improvement but at a significant cost increase. By testing the text and call ideas at different lead times, the team was able to improve results while lowering costs.
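
For readers who like to see the arithmetic, here is a sketch of how the four runs can be summarized as average effects. I am reading the 25% no-show figure as the text-at-one-day run, so take the exact numbers as illustrative rather than as the team's actual analysis.

```python
# A sketch of summarizing a 2x2 factorial as main effects and an interaction.
# Responses are the no-show percentages quoted in the post; pairing 25% with
# the text/1-day run is an assumption made for this illustration.
runs = [
    # (contact method, lead time in days, % no-shows)
    ("text", 1, 25),
    ("call", 1, 15),
    ("text", 3, 12),
    ("call", 3, 10),
]

def average(rows):
    return sum(r[2] for r in rows) / len(rows)

method_effect = average([r for r in runs if r[0] == "call"]) - average([r for r in runs if r[0] == "text"])
lead_effect = average([r for r in runs if r[1] == 3]) - average([r for r in runs if r[1] == 1])
# Interaction: half the difference between the lead-time effect under "call" and under "text"
interaction = ((10 - 15) - (12 - 25)) / 2

print(f"calling vs. texting changes no-shows by {method_effect:+.1f} points on average")
print(f"3-day vs. 1-day lead time changes no-shows by {lead_effect:+.1f} points on average")
print(f"interaction: {interaction:+.1f} points")
```

The positive interaction says the extra lead time helped the text message more than it helped the call, which is exactly the kind of thing one-factor-at-a-time testing would have missed.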

References:
1. Ronald Moen, Thomas Nolan, and Lloyd Provost, Quality Improvement through Planned Experimentation, 2nd edition, McGraw-Hill, New York, 1999.

Thursday, February 23, 2012

Zero Defects and Six Sigma



This article begins with a quote from Deming in The New Economics: "Conformance to specifications, zero defects, Six Sigma Quality, and all other (specification-based) nostrums all miss the point."

In addition, a good history of Crosby's contributions to Zero Defects and quotes from Dr. Juran on Six Sigma are included. The paper finishes with the Taguchi Loss Function and Deming's System of Profound Knowledge.

Lost to history is Ford's experience at the Batavia plant, where it was making exactly the same transmission as Mazda. This provides a real-life example of the difference between focusing on meeting specifications and manufacturing to target (Taguchi's idea). The outputs from both Mazda and Ford were within specification. However, the Ford transmissions had twice as many customer problems as the Mazda transmissions. VP John Beti famously remarked, "While we were busy meeting specifications, they were busy making them all the same." If you have never seen this video produced by Ford, it is worth the time. It is less than 12 minutes long.


At first, Ford elected to close the Batavia plant. This caused an outbreak of cooperation between management and the union. It took the plant 8 months to match Mazda's quality.

Harry Truman once remarked, "History does not repeat itself. We just keep making the same stupid mistakes."

Deming warned us in 1993, before he passed: "Conformance to specifications, zero defects, Six Sigma Quality, and all other (specification-based) nostrums all miss the point." Since his passing, we now have a worldwide movement that is back to focusing on specifications.