Thursday, December 12, 2013

Jack Welch and the Gymnastics of Self-Justification


The following post is a comment on the link below, which will take you to an article by Jack and Suzy Welch:


This article reminds me of the process reengineering aftermath, when the authors tried to rewrite history on “shooting the wounded,” getting rid of people, and so on. The authors wrote article after article claiming they were misunderstood. Regardless, “process reengineering” became a synonym for getting rid of people. The capacity for self-justification is a human trait that we all possess; intelligent people are even more adept at this core competency. Welch is obviously practicing it in this article.

Still, the fundamentals are wrong. Deming’s equation comes to mind:

·         X = the individual

·         Y = the system

·         X + [XY] = 8

As Deming would note, “One equation and two unknowns: unsolvable.” In many organizations the system is not under suspicion. Management merely sets Y = 0 and then attributes the entire result to X.
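
Restating the arithmetic (my own paraphrase of the bullets above; the 8 is simply an illustrative total):

```latex
% One equation in two unknowns: X (the individual) and Y (the system).
\[
  X + [XY] = 8
\]
% There is no unique solution. Declaring Y = 0, i.e. the system is beyond
% suspicion, forces X = 8: the entire result is credited, or blamed, to
% the individual.
```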

Leadership should be up close and personal. If it is, then it will be obvious who is contributing and who is not. Over the years we have watched several people be let go from organizations. Typically, it is obvious to everyone that the person does not fit. Tragically, even though this is apparent to everyone, the separation process drags on far too long for the health and morale of the organization. It is usually very difficult for leaders to admit that they have hired someone who does not fit. Meanwhile the organization suffers, relationships with customers are damaged, and the firm still faces the inevitable change that must take place.

Deming used to identify “substitutes for leadership.” Rank and yank was certainly a poor substitute.



Monday, June 3, 2013

Why is stability of the data so important?

Recently on a popular Six Sigma site the following question appeared:

 "I have a question if I have variable data ( that is not normally distributed) I then transferred it to Attribute data and worked out the DPMO from the opps/defects. If the DPMO is normally distributed can I carry on using stats such at t – tests etc. Or because it is originally attribute data I should use chi squared etc? Any advise appreciated."

From this question, you could run a three-day workshop. My short attempt at an answer included:

"As others have said, stay with the continuous data. Before doing anything else put the data on an appropriate control chart and learn from the special causes. As Shewhart noted: things in nature are stable, man made processes are inherently unstable. I have taken this from Shewhart’s postulates. T test and other tests all rest on the assumption of IID; Independent and Identically Distributed. If there are special causes present these assumptions are violated and the tests are useless. Even though the “control chart” show up in DMAIC under C for many novices, it should be used early. Getting the process that produced the data stable is an achievement. It is also where the learning should start. Calculating DPMO, and other outcome measures can come later; after learning and some work. Best, Cliff"

Why the fixation on outcomes, calculating capability, DPMO, and the like? Without any knowledge of the stability of the data, such calculations are very misleading. In 1989, I sat in a workshop where Dr. W. Edwards Deming made the following comment, "It will take another 60 years before Shewhart's ideas are appreciated." At the time, I thought he was nuts. Control charts were everywhere. Then they disappeared. Now I see Deming as a prophet.

Historically, we are going through a period in improvement science that is not unlike the Dark Ages. We have people grasping for an easy path and for quick answers generated by computers that might as well be "unmanned." Getting the process stable is an achievement! Our first move with statistical software should not be a normality check but a check of whether the data are stable and predictable. If we have such a state, then our quality, costs, and productivity are predictable. Without this evidence, we are flying blind.

Thursday, March 14, 2013

We are doing a 4.5 Sigma program?

Dr. Bill Latzko has published a very short and informative paper on the ideas underlying the Six Sigma program. Advocates of Six Sigma often talk about achieving Six Sigma quality, meaning 3.4 parts per million. Latzko discusses this figure and the assumptions behind it in this great paper:

http://www.latzko-associates.com/Publications/SIX_Sig.pdf
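
For readers who want to see where the numbers come from, here is a small check of the arithmetic. This is my own sketch, not code from the paper, and it assumes the conventional 1.5-sigma shift of the process mean that the Six Sigma literature builds into its figures:

```python
# One-sided normal tail areas behind the familiar Six Sigma figures --
# my own sketch, assuming the conventional 1.5-sigma shift of the mean.
from scipy.stats import norm

ppm_unshifted = norm.sf(6.0) * 1e6        # ~0.001 defects per million
ppm_shifted = norm.sf(6.0 - 1.5) * 1e6    # ~3.4 defects per million at 4.5 sigma

print(f"6.0 sigma, no shift:   {ppm_unshifted:.4f} ppm")
print(f"4.5 sigma (1.5 shift): {ppm_shifted:.2f} ppm")
```

In other words, the advertised 3.4 parts per million is really a 4.5-sigma tail area, which is the point of the title above.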

Dr. Taguchi’s idea of reducing variation around a target should be studied and understood by those who are interested in improvement. Focusing on meeting specifications has been a step backward that Deming warned us about in his last book: “Conformance to specifications, zero defects, Six Sigma Quality, and all other (specification-based) nostrums all miss the point.”
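
A hedged sketch of the contrast (my own example values, not taken from Taguchi or Deming): goal-post thinking treats every value inside the specification limits as equally good, while Taguchi's quadratic loss grows as soon as the result drifts off target.

```python
# Goal-post loss vs. Taguchi's quadratic loss -- illustrative values only.
def goalpost_loss(y, lower, upper, cost_of_defect):
    """Zero loss anywhere inside the specs, full cost outside."""
    return 0.0 if lower <= y <= upper else cost_of_defect

def taguchi_loss(y, target, k):
    """Quadratic loss k * (y - target)**2, growing with distance from target."""
    return k * (y - target) ** 2

target, lower, upper = 10.0, 9.0, 11.0
for y in (10.0, 10.9, 11.1):
    print(y, goalpost_loss(y, lower, upper, cost_of_defect=50.0),
          taguchi_loss(y, target, k=50.0))
```

The value just inside the spec (10.9) incurs almost the same quadratic loss as the value just outside it (11.1), which is exactly why "meeting specifications" misses the point.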

We can do better.