Human Factors and Unsafe Acts: SHELL MODEL and HFACS

by Raccoon Psychologist

Why Must Discussions of Safety Include Human Factors?

Looking back at the history of aviation safety, the 1970s marked a pivotal moment. Until then, the prevailing belief was that “mechanical factors accounted for aviation accidents.” That changed with the 1977 Tenerife disaster, in which two Boeing 747s collided on the same runway. Because the accident could not be explained by any mechanical factor, the focus of safety management quickly shifted to understanding human factors and developing ways to manage them. Human factors is an interdisciplinary field concerned with issues involving people in operational environments; it draws on human factors engineering, human physiology, cognitive psychology, and more. Ultimately, it is human behavior that takes the final step toward an accident, which is why human factors pays particular attention to unsafe behaviors.

How important are human factors, and the unsafe behaviors they produce, that they cannot be ignored in any discussion of safety? A study of 523 incidents in the Taiwanese Air Force identified 1,762 human factors, of which 725 were unsafe behaviors, accounting for 41.1% of all human factors. Errors in technical operation occurred 226 times, decision-making errors 223 times, perception errors 116 times, and violations 160 times; relative to the 523 incidents, that is 43.2%, 42.6%, 22.2%, and 30.6% respectively (Hsu et al., 2010). On average, then, each incident involved roughly 1.4 unsafe behaviors, with technical-operation and decision-making errors the most common. Unsafe behaviors rooted in human factors are thus significant hazards, and managing safety requires appropriate management of human factors and unsafe behaviors.
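As a quick arithmetic check of those figures, here is a minimal Python sketch (the variable names are ours; note that the per-category percentages only reproduce when computed against the 523 incidents):

```python
# Sanity check of the Hsu et al. (2010) figures quoted above.
incidents = 523
all_human_factors = 1762
unsafe_acts = {
    "technical operation": 226,
    "decision-making": 223,
    "perception": 116,
    "violation": 160,
}

total_unsafe = sum(unsafe_acts.values())
print(total_unsafe)                                      # 725
print(round(100 * total_unsafe / all_human_factors, 1))  # 41.1 (% of all human factors)
print(round(total_unsafe / incidents, 1))                # 1.4 unsafe acts per incident
for name, count in unsafe_acts.items():
    # 43.2, 42.6, 22.2, 30.6 (percent relative to the 523 incidents)
    print(name, round(100 * count / incidents, 1))
```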



Definition of Human Factors

Since human factors is an interdisciplinary field, definitions vary among scholars and manuals. Let’s start by examining how the ICAO Doc 9859 Safety Management Manual addresses human factors:

Human factors is about: understanding the ways in which people interact with the world, their capabilities and limitations, and influencing human activity to improve the way people do their work. (ICAO 9859 Safety Management Manual, 4th ed.)

In Zhang Youheng’s (2016) Aviation Safety Management, another definition, from the FAA, is quoted, and we can refer to it as well:

Human Factor is a multi-disciplinary effort to generate and compile information about human capabilities and limitations and apply that information to equipment, systems, facilities, procedures, jobs, environments, training, and personnel management for safe, comfortable, and effective human performance.

Beyond confirming that it is an interdisciplinary field, you can observe from the above definitions that human factors concerns at least two key points:

  • How people interact with the world. This concerns how the interface between a person and the external world behaves during interaction.
  • Human capabilities and limitations. This is why some human factors courses are titled not “Human Factors” but “Human Performance & Limitations”.

These two core concerns define the scope of human factors. To understand “how humans interact with the world,” and to bring those interactions under control, the SHELL MODEL offers an analytical framework.



SHELL MODEL

Illustration of the SHELL MODEL. Source: ICAO Doc 9859, 4th ed.

The SHELL MODEL was developed in the 1970s to describe “how humans interact with their world.” Because it pairs a “human” with the “world,” the combination has several distinct characteristics:

  • The “human” is at the center of the model: the middle L stands for Liveware, the human who is the core focus.
  • The surrounding “world” in the SHELL MODEL consists of Hardware (H), Software (S), Environment (E), and other people (Liveware, L).

If the focus is a cockpit crew operating an aircraft, the four aspects of the “world” in the SHELL MODEL can be exemplified in the aviation context as follows:

  • Hardware (H): the aircraft’s physical equipment, for example. An aircraft fitted with a Ground Proximity Warning System (GPWS) gives the cockpit crew a different level of assistance and alerting than one without: absent GPWS, the crew must rely heavily on visual awareness for terrain separation, whereas with GPWS they receive system alerts.
  • Software (S): not computer code but procedures, manuals, and policies. For instance, in poor weather an approach may be permitted, but how many attempts are allowed if stabilized-approach criteria are not met before a go-around is required? That is a procedural question.
  • Environment (E): adverse weather, for example. Handling typhoons, crosswinds, volcanic ash, or low visibility differs from operating in clear weather, and this changes how humans interact with the environment.
  • Other people (L): other cockpit crew members, air traffic control, cabin crew, and other operational personnel, for example. Collaboration between the cockpit crew and ATC can produce different outcomes depending on communication styles and potential misunderstandings.

The Key to the SHELL MODEL Is the Interaction Interface

At first glance, the SHELL MODEL seems to state the obvious: distinguishing a person from the world around them is part of daily life, and nothing about it looks particularly magical. But revisit the SHELL MODEL diagram and there is a subtle yet crucial detail: the boundary between the “human” and the “interacting world” is drawn not as a straight line but as a curve. The components have to be matched along that boundary, and this interface is the critical aspect.

What does an interface mean here? It describes how, because of some condition of the world, the human is affected. For example:

  • Taking off during a typhoon, the crew must constantly monitor weather forecasts to confirm that safety limits are met; the attention this consumes leads them to fail to confirm correct runway alignment.
  • Because of a Japanese controller’s English accent, the crew repeatedly fails to catch an instruction, feels too embarrassed to keep asking, and communication breaks down.
  • Because of an equipment malfunction, the aircraft no longer meets the criteria for entering RVSM airspace, and the crew must choose a different usable altitude.

In a SHELL MODEL analysis, the point is not merely to list the “characteristics” of the world (hardware, software, environment, and other people) but to describe how those characteristics interact with the human. If you remember nothing else, recall that in the SHELL MODEL the “human” meets the world along a “curve”: spelling out what happens along that curve is the essence of the analysis.
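To make the interface idea concrete, here is a minimal sketch (our own illustration; the class and field names are assumptions, not part of any SHELL standard): an occurrence is tagged not with a bare S/H/E/L label but with the L-X interface plus a description of how the condition affected the human.

```python
from dataclasses import dataclass

@dataclass
class InterfaceFinding:
    """One SHELL finding: always a pairing anchored on the central L."""
    interface: str        # "L-S", "L-H", "L-E", or "L-L"
    condition: str        # the state of the world-side component
    effect_on_human: str  # how that state shaped the person's behavior

# The typhoon example above, expressed as an interface rather than a category:
finding = InterfaceFinding(
    interface="L-E",
    condition="typhoon: takeoff-limit winds with constantly changing forecasts",
    effect_on_human=("attention consumed by weather monitoring; "
                     "correct runway alignment never confirmed"),
)
print(finding.interface, "->", finding.effect_on_human)
```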


Why Is the SHELL MODEL Not Sufficient for Safety Management?

Since the SHELL MODEL analyzes how humans interact with the world so thoroughly, one might expect human factor risks to be fully identifiable and appropriately manageable with it. The answer may disappoint: today the SHELL MODEL functions mainly as an “educational model,” used in human factors courses to demonstrate potential sources of influence, rather than as a practical tool for safety management.

Why? Based on my experience, the reasons are as follows:

  • SHELL MODEL analyses usually explain events retrospectively, not predictively. Strong winds and bad weather during a typhoon are known challenges, but knowing only that, how would one predict that the aircraft might line up on the wrong runway, and that weather pressure would lead the crew to ignore alerts about the incorrect alignment and take off from it? Before the fact, the model cannot say.
  • Even granting that it is retrospective, could it not at least serve as a statistical indicator? Count the occurrences of S, H, E, and L and you would know which is the bigger problem. The argument sounds reasonable, but the categorization is too crude: there are countless different environments, hardware items, and procedures, and lumping each class into a single bucket makes little sense. Worse, if the real focus is the interface, every interaction’s interface differs, so aggregating them is stranger still. This is why companies rarely run safety management directly on SHELL MODEL statistics.
  • Most troublesome of all, the SHELL MODEL does not explain how the final error occurs. Take the unintentional deployment of an escape slide because someone failed to switch the door mode from ARM to DISARM. In SHELL terms this is an L factor, another person. But the model cannot explain how that person failed to see the indicator lights, or held an incorrect mental model, and so executed the final wrong action; the SHELL MODEL simply lacks the capacity to address this.

Therefore, anyone who really wants to manage human factors cannot rely on the SHELL MODEL alone. The root of the problem is how human factors are categorized, so we need to look for other models.



Unsafe Behaviors in Human Factors: Categorization Based on Psychological Processes

Categorizing unsafe behaviors related to human factors has always been a perplexing problem. Before tools like HFACS, such categorization was based primarily on observable phenomena. For example:

  • A was looking at his phone while walking, failed to notice what was ahead, and hit a utility pole. Not paying attention to the path ahead while walking might be deemed negligence.
  • B was looking at his phone while driving, failed to notice what was ahead, and hit a pedestrian. The phenomenon looks identical: both were looking at phones and both collided, differing only in whether the obstacle was a pole or a person, which should not affect the category. So this, too, could be deemed negligence.

If we followed purely phenomenon-based categorization, we would treat these two incidents as the same type of unsafe behavior. However, some people would quickly object:

Under Article 31-1 of the Road Traffic Management and Penalty Act, a driver who uses a handheld device such as a mobile phone or computer while driving to dial, talk, transmit data, or perform any other act jeopardizing driving safety is fined NT$3,000. A driver who has passed the national licensing test and holds a license should know that texting while driving is prohibited; doing it anyway indicates an intentional deviation and should be categorized as a willful violation.

Here, you might notice a key question:

How do we know whether “the driver knew texting while driving is not allowed”?

Indeed, we don’t. This highlights a crucial difficulty in categorizing unsafe human-factor behaviors: without understanding the individual’s actual mental process, unsafe behaviors cannot be categorized accurately. In this example:

  • Both A and B had to split attention between watching the path or road and using the phone, which can produce an attention-based error.
  • But if B knew texting while driving was prohibited and did it anyway, this is an intended deviation, a violation. If instead B was unaware of the rule, or drove despite insufficient skill, it points to a training and skills problem. Which is it? Only B can say, and only an honest, sincere answer will tell us.

Therefore, categorizing unsafe behaviors related to human factors must be grounded in their root causes in psychological processes. The point of categorization is to clarify which psychological process failed and produced the unsafe behavior. Different psychological failures then guide different risk mitigation measures, together with metrics for observing change and evaluating the effectiveness of interventions.

Based on the root causes in psychological processes, unsafe behaviors fall into five types: slip, lapse, perception error, mistake, and violation. They are measured using the four act categories from HFACS: skill, perception, decision, and compliance, with corresponding risk mitigation measures applied to each. The sections below walk through this categorization.
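As a minimal sketch of that correspondence (the dictionary and names are our own illustration, following the usual reading of these five error types against the HFACS act categories described above):

```python
# Psychological root cause -> HFACS act category.
# Slips (attention) and lapses (memory) are unintended execution failures,
# so both land in the skill-based category; a mistake is a wrong mental
# model (decision); a violation is an intended deviation (compliance).
ROOT_CAUSE_TO_HFACS_ACT = {
    "slip": "skill-based",
    "lapse": "skill-based",
    "perception error": "perception",
    "mistake": "decision",
    "violation": "compliance",
}

print(ROOT_CAUSE_TO_HFACS_ACT["lapse"])  # skill-based
```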



Human Factors Analysis and Classification System (HFACS)

From the above analysis, we can draw some conclusions:

  • An “unsafe behavior” in human factors is the direct action leading to the occurrence of an outcome. Categorizing unsafe behaviors should consider the root causes of psychological processes.
  • The SHELL MODEL illustrates that “human factors” involve various interfaces with humans, such as software, hardware, environment, and other people, all of which impact humans.

Since there are numerous ways to categorize human factors, and considering both points above, the most recommended today is the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann, 2003), a relatively comprehensive human factors analysis tool and classification system. We now introduce in detail how the model works.


The HFACS Model Based on Reason’s Model

Using Reason’s Model to analyze human factors yields HFACS. Source: Shappell & Wiegmann (2003).

HFACS is a human factors classification model based on Reason’s Model (commonly known as the Swiss cheese model). Reason’s Model states that:

No single cause leads to an accident. It is always a series of defense failures that ultimately leads to an accident.

HFACS takes up this concept: even when a final unsafe act directly causes an accident, a series of human-related defense failures must have preceded it. The value of HFACS therefore lies in stratifying human factors and defining clearly what belongs in each layer and category.

This is why we choose HFACS: although human factors can be categorized in many ways, Shappell & Wiegmann’s (2003) HFACS divides them into four layers: unsafe acts, preconditions for unsafe acts, unsafe supervision, and organizational influences. Only when every layer fails to block a risk does an accident occur. This multi-layered framework helps us systematically identify the root causes of human factors and develop improvement measures at each layer.
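A toy numerical illustration of the layered-defense idea (the probabilities below are invented purely for illustration): if each layer independently fails to stop a hazard only occasionally, an accident requires every layer to fail at once, which is why no single cause is ever sufficient.

```python
from math import prod

# Hypothetical per-layer failure probabilities, outermost to innermost:
# organizational influence, unsafe supervision, preconditions, unsafe act.
layer_failure_p = [0.10, 0.05, 0.20, 0.02]

# Under an idealized independence assumption, an accident needs all
# four defenses to fail on the same occasion.
p_accident = prod(layer_failure_p)
print(p_accident)  # ~2e-05: rare, but never zero
```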

The original HFACS by Shappell & Wiegmann (2003) has four layers:

  • First layer: Active Failure – Unsafe Acts
  • Second layer: Latent Failure – Preconditions for Unsafe Acts
  • Third layer: Latent Failure – Unsafe Supervision
  • Fourth layer: Latent Failure – Organizational Influence

Adapting HFACS to Practical Needs: The Example of NASA HFACS

After HFACS was proposed, it offered a broad template: human factors classification can be stratified into a multi-layer analysis, but the content of the categories is not strictly fixed. Scholars and organizations have since modified it to suit their practical needs; a review of 43 peer-reviewed HFACS studies found that over 60% used individually modified versions of the model (Hulme et al., 2019). Because our goal is to introduce the root causes of error, we prioritize ease of comprehension, and we therefore use NASA’s adaptation, NASA HFACS, as our framework, appropriately reduced for illustration. This takes nothing away from other HFACS variants.

Using the NASA HFACS framework has several notable benefits:

  • The NASA HFACS manual and quick-reference guides can be downloaded from the official website, so the material is openly available.
  • Compared with other HFACS versions, we find its definitions clearer and its subcategories more explicit.
  • Compared with the original HFACS terminology, NASA HFACS uses more neutral language, in line with NASA’s view of human factors: besides identifying shortcomings, it also marks and classifies areas of strong competency.

The NASA HFACS diagram appears below. Importantly, because our purpose is only to introduce psychological processes, we select only the key parts to illustrate. This reduced content is suitable for learning about psychological processes; it does not represent NASA’s intent, nor does it imply that NASA itself uses a reduced scheme for classification and evaluation. For research, please refer to the NASA HFACS manual.

NASA HFACS v1.4

For the original NASA HFACS manual, please refer to the NASA OSMA (Office of Safety and Mission Assurance) Human Factors page.



Categorization of Unsafe Behaviors

What the original HFACS calls unsafe acts, NASA HFACS labels simply Acts, with four major classifications: Decision-Making, Skill-Based, Perception, and Compliance. Here we have reordered the categories to follow the psychological process and trimmed the classifications heavily, retaining only the most frequently used subcategories. This selective introduction therefore cannot reflect the original intent of NASA HFACS; for the full picture, please return to the original page and consult the guiding manual.


Perception

From the information-processing perspective of cognitive psychology, a human processes information much as a computer does, beginning with input, the counterpart of a keyboard or mouse. Perception errors occur at this input stage, such as misreading or mishearing, and corrupt all subsequent information processing.

Code: AP101
Name: Error due to Misperception
Details: Errors at the perception (information input) stage, such as misreading, mishearing, or perceptual illusions.
Example: Pilots unfamiliar with the instrument panel design misread the left-engine vibration indicator as the right engine’s, leading them to shut down the normally operating engine. (January 8, 1989, British Midland Airways Flight 92)

Skill-Based

Whereas perception errors sit at the input stage of information processing, skill-based errors occur at the output stage: in a non-deliberate situation, the action produces an unintended outcome. The core cause is generally a failure of attention or memory. Occupational deficiencies, where lacking the knowledge to execute a task produces skill errors, also surface in this category, but their roots are evaluated among the latent failures: personal readiness at the precondition layer, inadequate training at the supervision layer, and training and manual regulations at the organizational layer.

Code: AS101
Name: Inadvertent Operations
Details: Non-deliberate actions in which equipment, switches, or control surfaces are accidentally activated or deactivated; the individual may not even be aware of the action.
Example: A crew member accidentally spilled coffee in the cockpit; it splashed onto the radio control panel, which began to smoke and disrupted radio communications, forcing a diversion. (February 6, 2019, Smartwings Flight 2116)

Code: AS102
Name: Checklist / Procedural Error
Details: Errors in executing a procedure or checklist, rooted in attention, memory, or occupational competence.
– Error due to Attention: procedural errors that typically arise when an automated, habitual action misfires; the root is attention. Example: intending to open the cockpit door during flight, a crew member turned the rudder trim knob without verifying it, causing the aircraft to roll 180 degrees. (September 6, 2011, All Nippon Airways Flight 140)
– Error due to Memory: procedural errors rooted in memory; typically something that should have been done was not. Example: after the intended takeoff runway changed, the crew, absorbed in recalculation, did not use the checklist and failed to configure the aircraft for takeoff; the aircraft crashed for lack of lift. (August 16, 1987, Northwest Airlines Flight 255)
– Error due to Occupational Competence: procedural errors due to insufficient training, leaving the individual unable to meet the procedure’s demands. Example: facing a stall during approach, the crew raised the angle of attack and retracted the flaps, reducing airspeed further, contrary to stall-recovery procedure. (February 12, 2009, Colgan Air Flight 3407)

Decision-Making

In terms of psychological process, decision-making errors occur when an individual fails to form an effective mental model, or forms an incorrect one, and the subsequent actions go wrong as a result. By analogy with a math problem: the failure is not misreading the text or slipping in calculation, but misunderstanding the question itself, so that the work fits the mental model held at the time yet still yields the wrong answer.

Code: AD101
Name: Incorrect Action Executed
Details: Because of an incorrect mental model, the individual prioritizes tasks wrongly or performs the wrong action.
Example: Despite the left engine’s vibration problem, the pilots mistakenly shut down the right engine; when they disengaged the autothrottle, fuel flow to the left engine dropped and the vibration stopped, misleading them into confirming the right engine as the fault. (January 8, 1989, British Midland Airways Flight 92)

Code: AD102
Name: Inadequate Real-Time Assessment
Details: Inappropriate assessment of the risks of an action’s consequences, leading to a wrong decision and an ensuing unsafe situation.
Example: Poor weather at the destination defeated two landing attempts. Per the manual, unless justified, the flight should then divert to its alternate. Despite weighing the fuel level, the crew attempted a third approach, failed again, and declared low fuel en route to the alternate airport. (July 8, 2018, China Airlines Flight 170)

Code: AD103
Name: Ignore a Caution / Warning
Details: The individual perceives and understands a warning but disregards it.
Example: After passing the missed approach point in instrument weather, still unable to see the runway, the crew continued the landing attempt at an excessive descent rate, ignoring 17 EGPWS alerts, and crashed into the sea a few kilometers short of the runway. (September 28, 2018, Air Niugini Flight 73)

Compliance

Previously better known as Violation, this category covers unsafe acts whose underlying psychological process is an intention not to comply. Because the term itself carries negative connotations that can provoke resistance or denial in those under investigation, NASA strives to use more neutral wording, aiming to reconstruct the psychological process of intentional noncompliance as faithfully as possible.

Code: AC101
Name: Violation: Work Around, Widespread, Routine
Details: Systematic deviation from procedures and regulations without risk assessment; such deviations typically draw no disciplinary or management action.
Example: Cockpit crews repeatedly failed to comply with standard operating procedures without any management response. (July 23, 2014, TransAsia Airways Flight 222; February 4, 2015, TransAsia Airways Flight 235)

Code: AC102
Name: Violation: Lack of Discipline
Details: An individual violates regulations without specific reason or need. Such violations are typically isolated rather than widespread in a group, and there is no evidence that they were instructed or authorized.
Example: After the accident, crew members tested positive for alcohol, indicating they had operated under its influence while on duty. (September 14, 2008, Aeroflot Flight 821)
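Pulling the four act categories together, the classification logic can be sketched as a short decision procedure (our own teaching simplification; real coding follows the NASA HFACS manual, and the function and its arguments are assumptions for illustration):

```python
def classify_act(intended_deviation: bool,
                 input_stage_error: bool,
                 wrong_mental_model: bool) -> str:
    """Map the psychological root of an unsafe act to an act category
    (simplified sketch of the NASA HFACS act layer)."""
    if intended_deviation:
        return "Compliance (AC1xx)"       # knew the rule, deviated anyway
    if input_stage_error:
        return "Perception (AP1xx)"       # misread, misheard, illusion
    if wrong_mental_model:
        return "Decision-Making (AD1xx)"  # inputs fine, model wrong
    return "Skill-Based (AS1xx)"          # attention/memory/execution slip

# Driver B from the earlier example, if he knew texting while driving
# is prohibited and did it anyway:
print(classify_act(True, False, False))   # Compliance (AC1xx)
# Driver B, if the root was simply divided attention:
print(classify_act(False, False, False))  # Skill-Based (AS1xx)
```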

Because each error type has its own psychological root, our goal is to identify those roots and deepen understanding of the error types. Subsequent chapters will cover possible causes arising from attention and memory, from mental model errors, and from failure to comply with regulations.



Conclusion

Aircraft operations, maintenance, cabin service, ground handling, dispatch: any environment involving human activity faces potential unsafe behaviors rooted in human factors. To help colleagues understand how strongly human error shapes operations, IOSA and other airline audit standards require that personnel training include human factors content. Human factors, however, is not just a discipline of “knowledge” but one of attitude. By reflecting on human factors in their own life experience, trainees can stay humble about the human potential for error and remind themselves to remain cautious and vigilant. That, ultimately, is our goal: human factors training is less about transmitting knowledge and more about learning from mistakes to bring about real change.



References

Hsu, H., Li, L., Hsu, Y., & Li, W. (2010). Application of “Human Factors Analysis and Classification System” and “Flight Safety Voluntary Report” to Analyze Potential Flight Risks of Air Force Transport Fleet. Journal of Crisis Management, 7(2), 59-68.

Wiegmann, D. A. & Shappell, S. A. (2003). A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System. Ashgate.

Hulme, A., Stanton, N. A., Walker, G. H., Waterson, P., & Salmon, P. M. (2019). Accident analysis in practice: A review of Human Factors Analysis and Classification System (HFACS) applications in the peer-reviewed academic literature. In Proceedings of the Human Factors and Ergonomics Society 2019 Annual Meeting.

NASA OSMA: https://sma.nasa.gov/sma-disciplines/human-factors
