Tuesday, March 21, 2017

Part 2: Complexity of the Real World Has Outpaced the Myth of the Linear Method – The Need for a Science-Based Methodology Based on Questions.


Part 1 (see link to Part 1 here) presented information on linear models that typically reflect an idealized view of a development or learning journey. Unfortunately, the real world typically deviates from this ideal view. What sort of model would better reflect our actual learning journey?
Learning from Deming – The Analytic Study
Deming offers some help on distinguishing two types of studies for learning from data and our experience in the real world. He classified studies into two types (1975), depending on the type of action that will be taken[i]:
1.       Enumerative study: one in which action will be taken on the universe that was studied (e.g., conducting a census, or sampling materials for a decision on acceptance or pricing).
2.       Analytic study: one in which action will be taken on a cause system to improve performance of a product, process, or system in the future (e.g., a study to select a future raw-material supplier or using a Shewhart control chart to learn and improve a process).
Note the key word in the definition of an analytic study: future. For Deming, the problem is prediction. In the Foreword to Quality Improvement through Planned Experimentation by Moen, Nolan, and Provost (2009), Deming discusses how the results of studies for improvement are used to predict:
Why does anyone make a comparison of two methods, two treatments, two processes, two materials? Why does anyone carry out a test or an experiment? The answer is to predict; to predict whether one of the methods or materials tested will in the future, under a specified range of conditions, perform better than the other one. Prediction is the problem, whether we are talking about applied science, research and development, engineering, or management in industry, education, or government. The question is, what do the data tell us? How do they help us to predict?
What sort of model should we follow to help with the problem of prediction? When in doubt, we can fall back on the methods of science, where questions become our guideposts for learning as we make predictions about the effects of our changes. Questions are powerful.

Science begins and ends in questions, but does not end in the same question in which it began.[ii]
Thinking about how to phrase useful questions is no easy feat. Dennett (2006) has offered a good description of the challenge of getting to useful questions[iii]:
“…anybody who has ever tackled a truly tough problem knows that one of the most difficult tasks is finding the right questions to ask and the right order to ask them in. You have to figure out not only what you don’t know, but what you need to know and don’t need to know, and what you need to know in order to figure out what you need to know, and so forth. The form our questions take opens up some avenues and closes off others, and we don’t want to waste time and energy barking up the wrong trees.”

This quote leads us to some observations about the utility and power of questions as we make changes and improvements:


·         When making improvements, questions lay out the journey of learning to develop, test, and implement changes.

·         A good question gives us the opportunity to consider many possible answers. People in the same process have different experiences and perspectives, and these differences may produce very different answers and predictions relative to a well-crafted question.

·         Curiosity is critical to the crafting of questions. People who are wedded to the current way of doing things can have their thinking changed as they journey with others through the process of answering the questions. Technical change brings social consequences; questions help the adult learner with self-discovery, a powerful process for social change.

·         Questions expand our thinking and lead to more questions and possibilities for making changes. Answers usually end this process of thinking, discovery, and learning.

PDSA – Learning Engine for Analytic Studies

Moving from linear methods that ignore the need for questions and corresponding predictions to a methodology suited to analytic studies is essential. One such methodology is Deming’s PDSA cycle. Deming’s learning journey toward the PDSA cycle began with his study of C.I. Lewis’s book, Mind and the World Order (1929), and ended with the publication of the PDSA cycle in Deming’s book, The New Economics (1994). Deming honored Dr. Walter Shewhart by calling the PDSA cycle the Shewhart Cycle for Learning and Improvement[iv]. This journey of learning was documented by Moen and Norman (2010) in the paper, Circling Back: Clearing Up Myths About the Deming Cycle and Seeing How It Keeps Evolving[v].

In 1994, Associates in Process Improvement (API) added the rigor and importance of questions to the Plan phase of the PDSA cycle[vi]. Figure 1 describes this contribution:

Figure 1: PDSA Cycle


When developing questions for a PDSA cycle, an individual or team is sometimes tempted to use yes/no questions. Table 1 describes moving from a yes/no question to an inquiry-based question[vii].


Table 1: Moving the Yes/No Question to an Inquiry Type Question
Why is it important to pose the inquiry-based question? When we convene a team of people to make an improvement, those people have had different experiences and interpretations of the system; more importantly, they may come from very different parts of the system. If a yes/no question is posed, people could all answer yes or no for various reasons based on their experiences, and we would miss the opportunity to share those experiences and interpretations. We surface these differences during the act of making predictions, when we explain the theories behind our predictions. An inquiry-type question forces the sharing of different perspectives during prediction, which in turn can change how we collect data or carry out a test. Consider an example:

In an exercise used to teach the PDSA cycle, questions were developed and people were asked to predict what impact exercise would have on blood pressure. A test was then carried out by having people run up and down stairs and collecting data on their blood pressures. As predictions were discussed, a nurse posed the theory that women generally have higher blood pressures than men. This led to the need to stratify the data into male and female categories to test the theory. During the Do part of the PDSA cycle, we observed that the men, some with beer bellies, were huffing and puffing during the brief experiment. The women in the experiment were all doing very well, and it was obvious that they had been working out. During Study, the theory of women having higher blood pressures than men was updated with a caveat: it depends on the health and physical condition of those in the experiment. Act: in the future, further stratify the data by history of physical exercise and health.
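The stratification step in the exercise can be sketched in a few lines of Python. This is a minimal illustration only: the readings, field names, and groupings below are invented, not the data from the classroom test.

```python
# Hypothetical post-exercise systolic readings; values are made up for illustration.
readings = [
    {"sex": "F", "works_out": True,  "systolic": 118},
    {"sex": "F", "works_out": True,  "systolic": 122},
    {"sex": "M", "works_out": False, "systolic": 141},
    {"sex": "M", "works_out": False, "systolic": 137},
    {"sex": "M", "works_out": True,  "systolic": 125},
]

def mean_by(records, key):
    """Stratify records on `key` and return the mean systolic reading per stratum."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r["systolic"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# First stratification (the nurse's theory), then the updated one (Act).
by_sex = mean_by(readings, "sex")
by_history = mean_by(readings, "works_out")
```

The point is not the arithmetic but the structure: each new theory surfaced during prediction suggests a new stratification variable, and the same data can be re-cut to test it.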

API adopted the PDSA cycle as an engine to drive learning. The Plan phase includes a plan for data collection. In Do, the data are plotted as they are collected during the test, providing information. During Study, the predictions are compared with the information collected in Do, creating knowledge. Action is then based on that knowledge.
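Plotting data as it arrives during Do can be as simple as maintaining a running series with a centerline. The sketch below uses a median centerline and invented values; it illustrates the idea of watching the data accumulate, not a prescribed charting method.

```python
import statistics

observations = []

def record(value):
    """Append a new observation and return it with the current median centerline."""
    observations.append(value)
    return value, statistics.median(observations)

# Invented measurements arriving one at a time during the Do phase.
for v in [12, 15, 11, 14, 18, 13]:
    point, centerline = record(v)
```

In practice each `(point, centerline)` pair would be added to a run chart so that surprises relative to the prediction are visible while the test is still underway.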

Built into the journey of moving from data to knowledge in the PDSA cycle is the idea of deductive and inductive learning[viii]. From Plan to Do is the deductive approach: a theory is tested with the aid of a prediction. In the Do phase, observations are made and departures from the predictions are noted. From Do to Study, the inductive learning process takes place: gaps and surprises (anomalies) relative to the prediction are studied, and the theory is updated or discarded as needed. Action is then taken on the new learning to improve our ability to predict, the essence of the analytic approach. Figure 2 describes this iterative process of learning.
Figure 2: PDSA – The Iterative Nature of Learning and Improvement

 Model for Improvement and PDSA

The PDSA cycle is designed around the tactical work of developing, testing, and implementing changes. The Model for Improvement uses three broad questions to frame a project. The first question captures the overall objective: What are we trying to accomplish? The second question, How do we know a change is an improvement?, provides the measures that tell us whether our changes are moving us toward that objective. The third question, What change can we make that will result in improvement?, drives the PDSA cycle and allows iterative PDSAs for learning from changes. The PDSA cycle is often misused by trying to frame a whole project in a single large PDSA instead of multiple, iterative PDSAs; the latter are far more effective for learning the impact of changes as we work toward the project’s objective. Figure 3 describes the Model for Improvement.

Figure 3: Model for Improvement – Three Questions and PDSA Cycle

 Conclusion

Part 1 presented information on linear models that typically reflect an idealized view of a development or learning journey. Unfortunately, the real world typically deviates from this ideal view.

·         The work of Dr. Jeff Conklin described how two developers following the same model did not perceive the world in the same way at the same time. One saw a problem, the other a solution.

·         Margaret Wheatley has observed that the fixation on linear models persists because people merely match their real-world results to the model given. The myth of the working linear model continues.

·         The distinction between complicated and complex systems: complicated things can be figured out with time, like how a calculator works. Complexity operates in open systems, whose dynamics are changing along with the experience of the observer. This idea was captured in the quote from Heraclitus: It is impossible to step into the same river twice.

Part 2 has presented Deming’s distinction between analytic and enumerative studies. As Deming observed, “the problem is prediction.” By moving from a linear model to one useful for addressing the dynamics of the analytic environment, we are better prepared to deal with the discoveries and challenges presented by the real world.

In recent years, various models for improvement have been developed, some focused on using various improvement tools in a prescribed order. This practice usually wastes valuable time applying tools rather than targeting learning around useful questions and making improvements quickly. The good news is that people are becoming aware of prescriptive models that emphasize the use of tools rather than focusing on improvement[ix]. Improvement professionals should encourage:

·         The use of methods that are useful in analytic studies. How do these methods lead to questions and enhance our ability to predict?

·         A method that employs questions that lead to more appropriate data collection and tests of changes.

·         Testing as the foundation of science and critical to carrying out an analytic study. Models that put an emphasis on implementation without adequate testing should be avoided.

·         The use of the scientific method, which includes the ideas of deductive and inductive learning. Models that have the scientific method built in should be encouraged. PDSA was developed with this idea in mind.

·         Avoidance of focusing on the use of tools rather than learning and improvement.

Finally, I am reminded of a conversation with a very bright individual who had led several successful improvement efforts: “Cliff, following an array of tools is easy. Developing the necessary questions is hard.” I couldn’t agree more. Thinking is very difficult. Being caught in the activity trap of using tools creates excitement and the illusion of progress. Ultimately, management will ask, “What have you done for me lately?” When asked, you will want to show a list of improvements that were executed quickly. If all you can show is activity with no results, the answer is often a reduction in force; far too many improvement professionals were forced out of a job in the last recession. President Truman observed, “History does not repeat itself, we just keep on making the same mistakes.” It is time to get focused on the science of improvement and the questions that drive the analytic problem and our ability to impact the future.


References:




[i]     Moen, R. D., Nolan, T. W., Provost, L. P. Quality Improvement Through Planned Experimentation, 3rd ed. McGraw-Hill, 2009.
[ii]    Feynman, Richard P. Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher (audio, abridged). © 1963, 1989, 1995 The California Institute of Technology; (P) 2005 Perseus Publishing. Note: I remember hearing these lectures and have attributed the quote to Feynman here, but have been unable to nail it down. I would appreciate a more accurate attribution; please contact me: cnorman@apiweb.org
[iii]   Dennett, Daniel C. Breaking the Spell: Religion as a Natural Phenomenon. Viking Press, 2006, p. 19.
[iv]    Deming, W. Edwards. The New Economics for Industry, Government, Education, 2nd ed. Published by MIT, 1994, p. 132.
[v]     Moen, R. D., Norman, C. L. "Circling Back: Clearing Up Myths About the Deming Cycle and Seeing How It Keeps Evolving." Quality Progress, American Society for Quality, November 2010.
[vi]    Langley, G. J., Nolan, K. M., Nolan, T. W. "The Foundation of Improvement." Quality Progress, ASQC, June 1994, pp. 81-86.
[vii]   Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., Provost, L. P. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. Jossey-Bass, 2009.
[viii]  Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., Provost, L. P. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. Jossey-Bass, 2009.
[ix]    Pandey, Satyarth. "RIP SIX SIGMA !!!!!" LinkedIn post, March 2, 2017.

Wednesday, February 15, 2017

Part 1: Complexity of the Real World Has Outpaced the Myth of the Linear Method

Dr. Russell Ackoff spoke often of how we were prepared in school to deal with the real world. We were usually presented with a “case study.” We would busy ourselves reading and developing ideas around the case, then turn it in or present it to the teacher for a grade. Ackoff noted that in the real world, problems do not come in the form of a case study, but as a “mess.” Part of our journey is understanding the mess and developing a statement of the challenge. Laurence J. Peter warns us:
“Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.”
The challenge before most of us trying to make improvements is the complexity of the environment in which we are working: systems that are not closed but open and dynamic, accompanied by influences of which we may or may not be aware. Ramo (2016) draws the distinction between complicated and complex systems:
Complicated mechanisms can be designed, predicted, and controlled. Jet engines, artificial hearts, and your calculator are complicated in this sense. They may contain billions of interacting parts, but they can be laid out and repeatedly, predictably made and used. They don’t change. Complex systems, by contrast, can’t be so precisely engineered. They are hard to fully control. Human immunology is complex in this sense. The World Wide Web is complex.
Complicated systems have the property of being closed, while complex systems operate in an open environment that is very dynamic. Many of us working to make improvements in our organizations can readily relate to the idea of a complex system. Into this fray, many of us are handed problem-solving methods that are linear, with the promise that if we follow the method, we will be led to a solution. Dr. Jeff Conklin presented some research on the so-called “Waterfall” method commonly used to develop software. Figure 1 describes this method:
Figure 1: Waterfall Method of Problem Solving

In this method, we are to gather data, analyze the data, formulate a solution and implement the solution. How does the real world react to this linear path? Conklin then presented the experience of one designer following the method. Figure 2 describes how the perception of the designer vacillates from problem to solution over the course of using this method:
Figure 2: Waterfall Method with One Designer

Conklin referred to this vacillation as a “wicked journey.” If you have worked on an improvement effort of any complexity, you can appreciate it. One day you are filled with hope and a solution; the next day, frustrated by an unintended consequence of your change, you face the challenge of adapting to new circumstances. More work to do!
Life would be good if we could handle complex challenges by ourselves (the lone designer) as in Figure 2. Complex challenges usually require subject matter knowledge of other people. What happens as we add other people? Do they share our perceptions of the problem and solution? Figure 3 describes the journey with two designers:
Figure 3: Waterfall Method with Two Designers

From Figure 3, we can readily see that the perceptions of the two designers track at times and diverge sharply at others. Improvement teams usually have three to five people. Conklin refers to this addition of people as “social complexity.” Research on personality tells us that people are very different; their perceptions of the same events, data, etc., may differ greatly depending on how they learn and on their subject matter knowledge.
An improvement team in an international tech company used a method called Understand, Develop Changes, Test Changes, and Implement Changes (UDTI). Within each phase, Plan-Do-Study-Act (PDSA) cycles were utilized. The team referred to their journey as “wicked.” Figure 4 describes this team’s journey:
Figure 4: Using UDTI with PDSA Cycles to Make an Improvement

Following the PDSA cycles in the figure, you can imagine the team’s frustration in PDSA 12, when a rush to implementation led to failure and required a visit back to the “Understand” phase. After this learning, testing was always done before implementation. The six cycles of implementation at the end were the spread of known changes to other regional groups.
What is the downside of vacillation between the stages of a linear method? When an improvement team discovers an unintended consequence of a test, they must go back to a prior stage of the linear model, and many see this as failure. People experienced with improvement efforts understand that when addressing complex challenges, learning and unlearning are natural parts of the journey. However, the same organization that used the UDTI method had one team in Europe that eliminated all the failed PDSA cycles from its account of the improvement journey, forcing a perfect match to the method. Unfortunately, this sort of practice, while it may help self-esteem, has nothing to do with the science of improvement.
People who use such linear methods often discuss the vacillation from the hopeful path. One of my colleagues is looking for the first project of any complexity that actually follows such a method; so far, we have not found one. Margaret Wheatley once commented on why the myth of success with linear methods continues: “After the fact, people usually report their journey by the prescribed method, thereby reinforcing their use.” Dr. Jeff Conklin and the UDTI team have given us some insight into the use of such methods as they encounter a world of complexity. Hopefully, we won’t be surprised when the real world does not cooperate.
In Part 2, we will examine methods based on the science of improvement, including the importance of questions in addressing complex systems and in dealing with the social consequences of technical change.

References:
1.       Conklin, Jeff, Wicked Problems and Social Complexity (2008); This paper is Chapter 1 of Dialogue Mapping: Building Shared Understanding of Wicked Problems, by Jeff Conklin, Ph.D., Wiley, October 2005. For more information see the CogNexus Institute website http://www.cognexus.org. © 2001-2008 CogNexus Institute. Rev. Oct 2008.
2.       Ramo, Joshua Cooper. The Seventh Sense: Power, Fortune, and Survival in the Age of Networks. Little, Brown and Company, 2016, p. 137.
3.       Leadership and the New Science, Margaret Wheatley, Berrett Koehler Publishers, San Francisco, 1992. Note: In searching this book, we were not able to locate the quote from Wheatley. In communication with her staff, we were told to attribute. The reader may find this reference useful. Find Part 2 here


Acknowledgment
      Jesse Trevino and Jane Norman offered comments to make this more readable. Ron Moen added suggestions for including Deming’s concepts of enumerative and analytic studies. Bruce Boles, Jonathan Merrill, and Dave Hearn reviewed. Many thanks to all for the help!

Thursday, February 2, 2017

Time to Retire the 16th Century Root Cause Phrase and Thinking  


Systems thinking has destroyed the 16th-century idea of the single cause, and it has been on a roll since Bertalanffy wrote General System Theory in 1968. In spite of this, the phrase "root cause" persists.

Psychologically, there is an upside and a downside to the phrase “Root Cause Analysis” (RCA). The upside: the illusion of a single cause gives people hope. It sends the message that one thing is going on, and they can handle that. The downside: the mental image of a “root cause” leads people to hunt for a single cause. I once watched in horror as a Master Black Belt (MBB) led a group of engineers in a high-tech company through a multi-voting exercise on an Ishikawa diagram. Once the MBB had all the votes, the group focused on the “top cause,” and a short plan was put together to investigate this single "cause." Since I was visiting, I was silent until someone asked me what I thought. I asked a question about the possible covariance of factors for the application being discussed. After one engineer confirmed that the factors do indeed interact, they got back to reality: rather than one factor, they needed to consider multiple factors in a designed experiment.
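The reason a voted "top cause" can mislead is that effects need not be additive. A minimal sketch of a 2x2 factorial analysis makes the point: with two factors, the effect of one can depend on the level of the other, which no single-cause vote can reveal. The responses and factor coding below are invented for illustration, not data from the engineering case.

```python
# Responses of a hypothetical 2x2 factorial: keys are (A level, B level)
# coded -1 (low) and +1 (high); values are made-up yields.
runs = {
    (-1, -1): 60, (+1, -1): 72,
    (-1, +1): 68, (+1, +1): 95,
}

def main_effect(factor):
    """Average response at the high level minus at the low level (factor: 0 = A, 1 = B)."""
    hi = [y for lv, y in runs.items() if lv[factor] == +1]
    lo = [y for lv, y in runs.items() if lv[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction():
    """Half the difference between B's effect at high A and B's effect at low A."""
    b_at_hi_a = runs[(+1, +1)] - runs[(+1, -1)]
    b_at_lo_a = runs[(-1, +1)] - runs[(-1, -1)]
    return (b_at_hi_a - b_at_lo_a) / 2
```

With these invented numbers, B's effect is 8 when A is low but 23 when A is high: acting on either "cause" alone would understate what happens when both change together, which is exactly what the designed experiment surfaces.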

There is hope; people are waking up. In 2015, the National Patient Safety Foundation exposed many of the problems with the myth of "Root Cause Analysis." From the report:

"RCA itself is problematic and does not describe the activity’s intended purpose. First, the term implies that there is one root cause, which is counter to the fact that health care is complex and that there are generally many contributing factors that must be considered in understanding why an event occurred. In light of this complexity, there is seldom one magic bullet that will address the various hazards and systems vulnerabilities, which means that there generally needs to be more than one corrective action. Second, the term RCA only identifies its purpose as analysis, which is clearly not its only or principal objective, as evidenced by existing regulatory requirements for what an RCA is to accomplish. The ultimate purpose of an RCA is to identify hazards and systems vulnerabilities so that actions can be taken that improve patient safety by preventing future harm.

The term RCA also seems to violate the Chinese proverb “The beginning of wisdom is to call things by their right names,” and this may itself be part of the underlying reason why the effectiveness of RCAs is so variable. While it might be better not to use the term RCA, it is so imbedded in the patient safety culture that completely renaming the process could cause confusion."


The last line is tragic; unlearning is usually the first step in learning for many (some try to avoid it at all costs). With this line, the authors are in effect protecting people from learning. Cognitive dissonance is a natural part of how we learn, adapt, and change. The paper on RCA2 can be found here:

http://c.ymcdn.com/sites/www.npsf.org/resource/resmgr/PDF/RCA2_v2-online-pub_010816.pdf

The effort to restore systems thinking and 21st-century science continued in February 2017 with the publication of Rethinking Root Cause Analysis by Kiran Gupta, MD, MPH, and Audrey Lyndon, PhD. The paper has some excellent tables describing the various problems associated with RCA, and the authors build on the 2015 report referenced above. Their paper can be found here:

https://psnet.ahrq.gov/perspectives/perspective/216




Friday, September 18, 2015


Designing Structure to Help People Prevent Errors

Are we making slips or mistakes?

Dr. Donald Norman (1988) presented two fundamental categories of errors that people make while interacting with work processes and systems:

  • Slips: …result from automatic behavior, when subconscious actions that are intended to satisfy our goals get waylaid en route.
  • Mistakes: …result from conscious deliberations
Norman warns us that the same mental processes that make us creative and insightful by allowing us to see relationships between apparently unrelated things, that let us leap to correct conclusions on the basis of partial or even faulty evidence, also lead to error.

Slips

All of us make slips every day. In fact, the more skilled we are, the more prone we are to simple slips. For example, when learning how to drive, we do not usually pull out in front of another vehicle without really paying attention. Yet all of us as skilled drivers have probably experienced the horror of looking quickly, pulling from the curb, and almost being hit by another driver. People learning to drive will look two or three times, paying great attention, and pull from the curb only when the “coast is clear.” Slips are almost always small things and can be readily fixed; they are generally more embarrassing than dangerous. Norman has defined six types of slips:

  1. Capture errors
  2. Description errors
  3. Data-driven errors
  4. Associative activation errors
  5. Loss-of-activation errors
  6. Mode errors
Let’s consider each type of slip and an example in real life:

Capture Errors - A frequently performed activity suddenly takes charge instead of the one you intended. The capture error usually happens when two different action sequences have their initial stages in common. For example, it is a nice Saturday morning and your spouse asks you to drive to the store to pick up some orange juice for breakfast. You pull out of the driveway and start on your journey. The next thing you know, you are pulling up to park at your workplace! The routine of driving to work has captured your intended action of driving to the store.

Description Errors - The intended action has much in common with others that are possible. Unless the action sequence is completely and precisely specified, the intended action might fit several possibilities. For example, many of us who travel a lot have had the experience of coming from a client meeting, finding our rental car, and, after some difficulty getting into the car, discovering we have the wrong car. Apparently, Ford has made more than one white Taurus. Much to our embarrassment, we then start looking for the car we actually rented that morning.

Data-Driven Errors - Automatic actions are data-driven, triggered by the arrival of sensory data. But sometimes data-driven activities intrude into an ongoing action sequence, causing behavior that was not intended. For example, while I was checking into a hotel, the clerk took a phone call from another guest requesting a wake-up call. The clerk then got off the phone and informed me that I would be in room 630. I mentioned that earlier he had said I would be on the first floor. He confessed that he had just taken a call to wake a guest at 6:30 and had confused it with my room number; I would be in room 110.

Associative Activation Errors
As we have noted, external data can trigger actions. Internal thoughts and associations can also lead to slips. For example, while I am watching TV, a telephone rings on the show; not realizing the ring is from the television, I pick up my own phone and greet a dial tone! Associative activation errors can also come from associations between thoughts and ideas, as when we are in a social situation thinking we should not say something inappropriate, and then, to our shock and surprise, we say it!

Loss-Of-Activation Errors

Many of us have committed this error and concluded that age must be taking its toll on our mental abilities. Loss-of-activation errors occur because the presumed mechanism, the activation of the goal, has decayed. For example, we walk from one room to another. Finding ourselves in the room, we are suddenly struck with the question: why are we in this room? Looking around frantically, we cannot remember why we made the trip. We go back to the room from whence we came to be “activated.” Looking around, we suddenly remember, and with renewed determination we return to the other room to complete the action.

Mode Errors

Mode errors occur when devices have different modes of operation, and the action appropriate for one mode has a different meaning in another. For example, in Microsoft Outlook I can make distribution lists to send certain types of communication to identified groups of people, a fantastic time saver! Periodically, I find it necessary to edit a list. When I select the entry to be deleted, I read from left to right, find the delete button, and click. To my surprise, I have now deleted the entire list! The correct button is farther to the right, under “Members.” See Figure 1 for the example.
 
Figure 1: Mode Error Example on Outlook 
 
  In the next installments of this conversation we will cover the following topics:
 
    • Design lessons from the study of slips
    • More on the nature of mistakes
    • Creating structure to help people avoid slips and mistakes


Reference:

  1. The Psychology of Everyday Things, Donald A. Norman, Basic Books, 1988