Episode #023 - Human Performance

It is not our workers and operators who need fixing, it is our workplaces.

-Todd Conklin, PhD, The 5 Principles of Human Performance

Hello!

I am interested in focusing on the judgment, reasoning, and decision-making processes of workers within work environments. Since the 1979 Three Mile Island accident and the Space Shuttle Challenger explosion, research has examined, at the macro level, why organizations suffer these cultural failures born of poor judgment, reasoning, and decision-making. Safety and organizational leadership push the causation of unintended outcomes onto employees as “worker error,” freeing the organization from responsibility and continuing the downward spiral of organizational culture until tragedy strikes.

Intended and unintended outcomes are directly influenced by the cognitive processes and systems workers use. To help workers perform at the desired level, we need to develop tools and resources they can use to prioritize goals and envision the intended result.

There is a gap in the research and reporting, and it lies at the micro level of organizations: why do well-trained workers who know the rules and have experience performing a task employ poor judgment, reasoning, or decision-making and end up with unintended outcomes? Thus, our question here is: what judgment, reasoning, and decision-making processes facilitate the normalization of deviance (drift) in workers?

While investigating existing studies on worker behavior and organizational investigation of incidents, I have found that a great deal of time, research, and written work already addresses the previously mentioned normalization of deviance, known as drift, as it pertains to organizations.

Prime examples of this include NASA, Enron, Three Mile Island, and the failed design and build of numerous dams, bridges, and tunnels. When issues like these occur, the blame is quickly placed upon the worker(s). In doing so, the organization is released from responsibility and drift continues unabated.

Because of this conventional blame system, there is very little data or research on what system or process led the individual worker to the unintended outcome. Our intention is to study the decision-making processes of ourselves and our workers, and to design systems and processes that help us make decisions that end in the intended outcome. And, when an unintended outcome does occur, to have systems in place that let the organization critique those systems, improve, and fight drift.

This leads us to HP. No, not Hewlett-Packard; in our context, it stands for Human Performance. Human performance has been taken up predominantly within safety programs, and the research on it and its implementation reflects that. But the reality is that these ideas are pertinent to C-suite decision-making, mid-level leadership, and frontline workers in any industry.

The Department of Energy (DOE) Human Performance Improvement Handbook defines it this way:

Human performance improvement (HPI) as addressed in this handbook is not a program per se, such as Six Sigma, Total Quality Management, and the like. Rather, it is a set of concepts and principles associated with a performance model that illustrates the organizational context of human performance. The model contends that human performance is a system that comprises a network of elements that work together to produce repeatable outcomes. The system encompasses organizational factors, job-site conditions, individual behavior, and results. The system approach puts new perspective on human error: it is not a cause of failure, alone, but rather the effect or symptom of deeper trouble in the system. Human error is not random; it is systematically connected to features of people’s tools, the tasks they perform, and the operating environment in which they work. A separate volume, Human Performance Improvement Handbook Volume II: Human Performance Tools for Individuals, Work Teams, and Management, is a companion document to this handbook. That volume describes methods and techniques for catching and reducing errors and locating and eliminating latent organizational weaknesses.

History

The ideas of Human Performance (HP) stem from the post-incident investigations of the 1979 Three Mile Island nuclear power facility. It took the Department of Energy almost 30 years, but through experimentation, psychology, and testing, it eventually arrived at the ideas and processes of HP. Here are the 5 Principles of Human Performance (Conklin, 2019):

  1. Error is normal. Even the best people make mistakes.
  2. Blame fixes nothing.
  3. Learning and improving are vital. Learning is deliberate.
  4. Context influences behavior. Systems drive outcomes.
  5. How you respond to failure matters. How leaders act and respond counts.

The Three Modes of Human Performance

Recent research in psychology and management studies has expanded upon Jens Rasmussen’s model for three modes of human performance. Each of these modes describes a set of behaviors and responses underlying how humans perform work.

Understanding these performance modes is the key to understanding human error.

Skills-Based Performance

Skills-based performance (SBP) describes situations in which workers perform a task with little conscious thought. SBP is usually the result of extensive experience with a given operation.

When operating in a skills-based mode, individuals rely on “pre-programmed sequences of behavior” with “little or no allocation of attention resources.”

You can think of SBP as things we do automatically, like riding a bike, typing, or writing by hand.

Knowledge-Based Performance

From its name, knowledge-based performance can easily be misinterpreted.

According to the Department of Energy (DOE) Human Performance Improvement Handbook, “the situation described as ‘knowledge-based mode’ might better be called ‘lack of knowledge’ mode.” This is because we rely on knowledge-based performance when we don’t know what we’re doing, such as when faced with wholly unfamiliar situations.

In these cases, we rely on our existing knowledge to help us. We look for patterns and apply schema we’ve learned from other tasks to the situation before us.

From our investigation into the Dunning-Kruger effect in Episode 21, we can see this is a real problem.

Rules-Based Performance

Rules-based performance (RBP) applies when changes in context prevent an individual from relying on skills. In this performance mode, a worker applies written or memorized rules to navigate an unfamiliar situation. If aspects of a situation match a learned skill, the worker will fall back on skill-based behaviors. If not, they will consult external sources.

Another way of thinking of rules-based performance is as a sequence of “if-then” decisions: if the situation is one way, then the prescribed behavior follows.
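
To make the three modes concrete, here is a minimal, purely illustrative Python sketch. Every task name, rule, and situation in it is hypothetical, invented for this example; it is not drawn from the DOE handbook or any other source, it simply mimics the dispatch between the three modes described above.

```python
# Illustrative sketch of Rasmussen's three performance modes.
# All task names, rules, and situations below are hypothetical.

SKILLS = {"tie_knot", "type_report"}  # well-practiced tasks (skill-based)

RULES = {  # memorized or written "if-then" rules (rule-based)
    "alarm_sounds": "follow the shutdown procedure",
    "pressure_high": "open the relief valve",
}

def perform(task: str, situation: str) -> str:
    if task in SKILLS:
        # Skill-based: pre-programmed behavior, little conscious attention.
        return f"execute '{task}' automatically"
    if situation in RULES:
        # Rule-based: IF the situation matches a learned pattern,
        # THEN the prescribed behavior follows.
        return RULES[situation]
    # Knowledge-based: no matching skill or rule -- the worker must stop,
    # gather information, and reason from what they already know
    # (the "lack of knowledge" mode).
    return "stop, gather information, apply schema from similar tasks"

print(perform("tie_knot", "routine shift"))     # skill-based
print(perform("inspect_pump", "alarm_sounds"))  # rule-based
print(perform("inspect_pump", "novel leak"))    # knowledge-based
```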

The Current

The current state of safety processes and incident investigation is as follows.

Human Performance has become an empty buzzword. Here is a great example: a quick Google search for “Human Performance” turned up a large energy company’s (nuclear division) “Human Performance Program” document.

Under the opening “Purpose” section, Part 1.1, I quote: “Establish a vision of Human Performance Event Free Operations throughout the fleet.” They haven’t even defined HP yet. The first sentence is in direct opposition to the tenets of HP.

This is what happens when incompetent safety teams are tasked with implementing operational initiatives. They cannot handle it, so they morph anything new they do not understand into what they already do. Thus, they fail.

Let’s harken back to Episode 22- Safety, defined.

The majority of safety programs, EH&S programs, and just about every other compliance program are what is called “behavior-based.” These types of systems are founded on the following assumptions:

Workers, in this view, are a problem to control. People’s behavior is something to control, something that you must modify (emphasis mine). Leaders of these EHS programs believe you have to start with people’s attitudes because those influence their behavior. EHS tries to shape these attitudes with posters and campaigns and sanctions, which they hope will impact workers’ behavior and reduce their errors (even though there is no scientific evidence that any of this works).

Also described as:

Safety is the absence of negative events. A system is safe if there are no incidents or accidents. The purpose of safety management is to ensure that as little as possible goes wrong. The focus is on negative events and reducing their severity and number. This often translates into trying to reduce the variability and diversity of people’s behavior, to constrain them and get them to adhere to standards (emphasis mine). (Hollnagel, 2014)

I want to reiterate just how ignorant this behavior-based, Old View of safety is.

In November 2018, I was on an EHS weekly conference call when the Senior VP (top guy) of the EHS department opened with:

“There have been a lot of slips, trips, falls, and turned ankles recently. We have to control the workers’ behavior before this gets out of hand.”

Have you ever tried to control a person’s behavior? How did that go?

This behavior in EHS departments is the norm. Someone gets hurt, and the investigation into the event stops at “Human Error.” This is by design: under the “behavior-based” safety mindset, EHS can absolve the organization of fault for the incident by saying the injured worker “made a mistake,” or is “stupid,” or “did not follow policy,” or any number of things. But if they were to truly follow a root cause analysis, they would find the cause of the incident is systemic, not individual.

This Old View of safety, the behavior-based one, can also be called the “Bad Apple Theory” of safety. It maintains:

Complex systems would be fine, were it not for the erratic behavior of some unreliable people (bad apples) in them.

In the eyes of behavior-based investigations, failures come as unpleasant surprises. They are unexpected and do not belong in the system. Failures are introduced to the system through the inherent unreliability of people.

The behavior-based, or what is thought of as Old View, set of ideals maintains that safety problems are the result of a few Bad Apples in an otherwise safe system. These Bad Apples don’t always follow the rules; they don’t always watch out carefully. They undermine the organized and engineered system that other people have put in place (people who have no experience performing the task). This, according to some, creates safety problems.

Here is a paragraph that fits any industry or workspace. As I read it, remember that safety is not just about bodily harm; it is also the safety of the organization.

“It is now generally acknowledged that human frailties lie behind the majority of accidents. Although many of these have been anticipated in safety rules, prescriptive procedures, and management treatises, people don’t always do what they are supposed to do. Some employees have negative attitudes to safety which adversely affects their behaviors. This undermines the system of multiple defenses that an organization constructs to prevent injury and incidents.”

This paragraph embodies all of the tenets of the Old View:

  1. Human frailties lie behind the majority of accidents. ‘Human errors’ are the dominant cause of trouble.
  2. Safety rules, prescriptive procedures, and management treatises are supposed to control erratic human behavior.
  3. But this control is undercut by unreliable, unpredictable people who still don’t do what they are supposed to do.
  4. Some Bad Apples have a negative attitude toward safety, which affects their behavior. So not attending to safety is a personal problem, a motivational one, and an issue of individual choice.
  5. The basically safe system of multiple defenses, carefully constructed by the organization, is undermined by erratic or unreliable people.

So, in other words: we are so smart that we would have fixed dangerous work, if it weren’t for all these mean, stupid, or accident-prone people doing the work.

This view, the Old View, is limited in its usefulness. In fact, it can be deeply counterproductive. It has been tried for decades, across every industry, without noticeable effect. Safety improvement comes from abandoning the idea that errors are causes and that people are the major threat to otherwise safe systems.

In the paper “Safety Differently: A New View of Safety Excellence,” Ron Gantt states:

According to the United States Bureau of Labor Statistics (2016), in 2015 the occupational fatal injury rate in the United States was 3.4 [per 100,000 full-time equivalent workers]. This represents a disappointing lack of change over the previous years’ occupational fatal injury rates, with the average occupational fatal injury rate in the United States remaining at 3.4 since 2008, according to analysis of data found on the Bureau of Labor Statistics website. Others within the safety profession have noted this stagnation of fatal incident statistics (Dekker & Pitzer, 2016; Loud, 2016; Manuele, 2013), each noting that progress in safety, as measured by the number of major accidents, appears to have plateaued. In a review of fatal injury rates in the United States conducted by the author, although the fatal injury rate has declined 35% from 1994 to 2015, in the last 10 years (2006-2015) the fatal injury rate has only dropped by 15.8%.

This plateauing of fatal injury rates suggests that progress in preventing major accidents may have the features of an asymptote, a line that a curve approaches but never touches. As Dekker (2015) notes, “asymptotes point to dying strategies” (28). The strategies utilized to achieve progress in preventing major accidents are providing diminishing returns. As a result, calls for new approaches to safety management have grown (Dekker, 2015; Dekker & Pitzer, 2016; Hollnagel, 2014; Hollnagel, Woods, & Leveson, 2006; Loud, 2016; Manuele, 2013).
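
To make the asymptote idea concrete, here is a small, purely illustrative Python sketch of my own. The model and every number in it are hypothetical, chosen only to mimic a rate that falls quickly and then flattens near 3.4; they are not fitted to BLS data or taken from Gantt’s paper.

```python
import math

# Hypothetical asymptotic model of a declining fatal injury rate:
#   rate(t) = FLOOR + (START - FLOOR) * exp(-K * t)
# The curve drops fast at first, then flattens toward FLOOR --
# diminishing returns from the same old strategies.

FLOOR = 3.4   # hypothetical asymptote the rate approaches
START = 5.3   # hypothetical rate in year 0
K = 0.15      # hypothetical decay constant

def rate(t: float) -> float:
    return FLOOR + (START - FLOOR) * math.exp(-K * t)

for year in (0, 5, 10, 15, 20):
    print(f"year {year:2d}: rate = {rate(year):.2f}")
# Early years show big drops; later years barely move --
# the curve approaches the 3.4 line but never reaches it.
```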

Old View, behavior-based safety is dying. It has become unsafe, if not outright detrimental to the safety of the worker.

Progress on safety comes from embracing the New View.

The Future

In the years since the release of the DOE document, its data and information have led to what is called “Safety Differently.” This is the New View of safety.

I will tell you, in advance, that I have implemented the New View principles. They work wonders!

Here is a quick glimpse of the Safety Differently Principles:

  1. Safety is not defined by the absence of accidents but by the presence of capacity.
  2. Workers aren’t the problem; workers are the problem solvers.
  3. We don’t constrain workers in order to create safety, we ask workers what they need to do to work safely, reliably, and productively.
  4. Safety doesn’t prevent bad things from happening, safety ensures good things happen while workers do work in complex and adaptive work environments.

Two major players in the push toward widespread adoption of Safety Differently are Sidney Dekker and Todd Conklin. I have quoted their works in multiple episodes. Recently, these two teamed up and wrote a book conveniently titled “Do Safety Differently.”

This book is a primer for implementing the Safety Differently principles in your organization.

In it they discuss how “safety will philosophically change from an outcome to be measured to a capacity that is maintained.”

Chapter titles include:

Do Safety Differently: From Outcome to Capacity.

And

Where there is too Much Compliance: Declutter your Bureaucracy.

It is good stuff!

No matter your industry or your role, you must remember: humans are fallible. They make mistakes. And tools, resources, and equipment fail. Like it or not.

So, ask your workers how they get good, quality, safe work done as often as they do. Their insights are valuable and will give you a picture of what work actually looks like, not what you perceive it to be. This gives you an understanding of your workers’ HP needs and experiences.

But! If you believe that work can get done without events ever occurring (ZERO initiatives, a future episode topic), then you will be proven wrong again and again. To keep your job, you will have to falsify investigations, manipulate data and its reporting, and concoct a scapegoat to take the blame off of you. And you will have to actually believe you can control people’s behavior.

Well, I think we have done a good job of diving into Human Performance. In coming episodes, we will continue to dive deeper into these ideas, their reasoning, and real-world observations: how we can ensure these processes drive us to excellence, and how well-managed task performance affects our lives, businesses, and organizations.

Links to all the quoted resources are in the show notes and in the transcript on my website, Eddiekillian.com

Join me next Tuesday as we continue to travel the path of what is difficult, perilous, and uncertain as we explore introducing A New Order of Things.

I am your host, Eddie Killian. And this concludes Episode 23.

References

Conklin, T. (2019). The 5 Principles of Human Performance. Santa Fe: PreAccident Media.

Dekker, S. (2011). Drift Into Failure. Boca Raton: CRC Press.

Dekker, S. (2014). The Field Guide to Understanding ‘Human Error’. Boca Raton: CRC Press.

Department of Energy. (2009). Human Performance Improvement Handbook Volume 1: Concepts and Principles. Washington D.C.: Department of Energy.

Department of Energy. (2009). Human Performance Improvement Handbook Volume 2: Human Performance Tools for Individuals, Work Teams, and Management. Washington D.C.: Department of Energy.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., & Kruger, J. (2008). Why the Unskilled Are Unaware: Further Explorations of (Absent) Self-Insight Among the Incompetent. Organizational Behavior and Human Decision Processes, 105(1), 98-121.

Fischhoff, B. (1975). Hindsight is not foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.

Goldstein, E. B. (2019). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Boston: Cengage Learning.

Hayes, A. (2023, March 28). Dunning-Kruger Effect: Meaning and Examples in Finance. Retrieved from Investopedia.com: https://www.investopedia.com/dunning-kruger-effect-7368715

Hollnagel, E. (2014). Safety I and Safety II: The past and future of safety management. Farnham, UK: Ashgate.

Kahneman, D. (2013). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Perrow, C. (1999). Normal Accidents: Living with High-Risk Technologies (2nd ed.). Princeton: Princeton University Press.

Peter, L., & Hull, R. (1969). The Peter Principle. New York: Bantam Books.

Vaughan, D. (1997). The Challenger Launch Decision. Chicago: University of Chicago Press.
