Human error may have contributed to the Key Bridge disaster — changing our approach to design can help reduce accidents

The Dali, right, sits amid the wreckage of the collapsed Francis Scott Key Bridge in Baltimore, Md., on April 1. (Kaitlin Newman/The Baltimore Banner via AP)

In the early morning of March 26, a 948-foot vessel struck the Francis Scott Key Bridge in Baltimore, Md., causing its collapse. Preliminary evidence shows that the container ship Dali lost power while making its way out of Baltimore harbour, leaving the crew unable to steer the ship.

The U.S. National Transportation Safety Board (NTSB) has launched an investigation into the accident to determine its causes. In a recent press conference, NTSB chair Jennifer Homendy indicated the investigation may take between 12 and 24 months to complete.

While information about the many factors that likely contributed to the accident is still sparse, early commentary suggests human error may have played a role. When interviewed about potential causes of the accident, Helen Sampson, director of the Seafarers International Research Centre at Cardiff University, did not rule out the possibility that someone's mistake contributed to the bridge's collapse.

In a recent interview with Sky News, Sampson asked: “Was there some sort of miscommunication or misunderstanding between the pilot and the crew? Or was there a pilot error?”

Considering that the accident happened around 1:30 a.m. local time, fatigue might also be a concern, she added.


Read more:
A human, environmental and economic emergency response to the Baltimore Key Bridge collapse

Science of human error

Human error has been a topic of scientific investigation since the Second World War, when governments started paying attention and investing resources into making machines more usable and, in turn, more effective. While the study of human factors, including human error, was initially restricted to the military, the investigation of how to build more human-friendly technology has since expanded to other fields.

In today’s world, experts in this field are employed across many industries. Automotive companies worldwide apply human factors principles in the design of vehicle interfaces and assistance systems. The development of mass-produced tech gadgets would not be possible without careful consideration of the needs and wants of global customers.

Despite the study of human error spanning decades, the fight to reduce its impact on accidents, let alone eliminate it, is not getting any easier. In fact, the more effort is directed at fixing this problem, the worse it seems to get.

Reducing the human factor

Over the years, engineering solutions have aimed at reducing the role of humans, the presumed source of human error, by automating manual tasks. However, adding less-than-perfect automation to fields like transportation has not eliminated accidents.

For example, the recent introduction of semi-autonomous driving systems, where control of the vehicle is shared between the human driver and the system, has raised significant concerns about this technology’s safety.


Read more:
Companies oversell the self-driving capabilities of their cars, with horrific outcomes

The many news headlines about Tesla drivers misusing these systems, as well as the 2023 recalls of Tesla’s Autosteer system by the U.S. Department of Transportation and Transport Canada, are evidence that the safety of these systems remains largely unproven.

Recent aviation accidents also point to the intrinsic limitations of adding more automation while disregarding key human factors. For example, in the two crashes involving Boeing’s 737 Max 8 aircraft, factors including poor pilot training and incorrect mental representations of the automated system ultimately contributed to the pilots losing control of the aircraft.

Indonesian Navy personnel handle a box containing the flight data recorder recovered from the crash site of Sriwijaya Air flight SJ-182 on Jan. 12, 2021. Indonesian aviation investigators concluded that a nearly decade-long failure to properly repair a malfunctioning automatic throttle, pilots’ overreliance on the plane’s automation system and inadequate training led to the 2021 crash of the Boeing 737-500, which killed 62 people. (AP Photo/Dita Alangkara)

Fixing human error

Human error cannot be fixed. Humans make mistakes; that is intrinsic to our nature. What can and should be fixed is our societal approach of trying to make things better or safer by using unproven technology.

We should prioritize a human-centred design approach, one that considers and designs for human characteristics and limitations, rather than a technology-centred approach that ignores or fails to fully account for human factors. While some progress has been made, the road ahead is still long.

It’s still too early to determine the extent to which human error played a role in the Francis Scott Key Bridge collapse. What we know is that more effort is needed to ensure that current and future transportation systems put humans at the centre of their design, rather than relegating them to a peripheral role and holding them culpable only when things go wrong.

The Conversation

Francesco Biondi receives funding from transportation and research agencies in Canada.