What Could Go Wrong…And Why It Often Does

“Of course, we consider risks.”

That’s the response from governmental agencies and companies when the subject of risk preparedness is introduced. Why, then, do we so frequently experience disasters like the compromised Fukushima reactor, the British Petroleum oil spill, and the Flint water contamination, or threats like the Kaspersky software “surprise”?

“Unpredictable events.” “Black Swans.” “A natural disaster.” “Limited human and financial resources.” “The perfect storm.” “Human negligence…malfeasance.” Plenty of explanations and excuses.

Here’s one more: the lack of a What Could Go Wrong process.

I can hear the objections: we have time-tested standards, a quality control process, risk officers, risk-based checklists, a project management program, and experienced reviewers.

We can do a better job with threat identification and prevention. Different levels of risk require different approaches. For adherence to standards and design norms, checklists are adequate, and most organizations do a good to very good job with this category of risks. Most do a fair to good job with conventional risks: how to prevent spills, what to do if a fire breaks out. Where we do a poor to awful job is with “barrier-free threats”: risks that fall outside an organization’s scope of work or competency, or outside conventional risk categories, and that have the potential to drastically affect a project, a product’s performance, or a policy’s effectiveness.

Consider a few examples:

  • A water treatment plant whose scope of work is to produce treated water that meets drinking water standards at the plant’s fence line, serving a community that doesn’t understand, or neglects, the consequences of corrosive water conveyed by lead pipes outside the fence line.
  • Kaspersky software, where a What Could Go Wrong process would have started with this question: knowing what we do about Russia, what if this company is a “wholly owned subsidiary” of the Russian government?
  • A new state-of-the-art German frigate that wasn’t designed to counter traditional threats. As a January 2018 Wall Street Journal article puts it: “(The) frigate was determined…to have an unexpected design flaw: It doesn’t really work”. And that’s highly regarded German engineering.
  • The Fukushima nuclear reactor that couldn’t withstand a tsunami, even though it sits on an island in an earthquake-prone region.

Instead of foresight, we struggle with after-the-fact accusations, “patches”, litigation, and PR initiatives.

Such threats aren’t Black Swans, like an asteroid strike, a Los Angeles-magnitude earthquake in Michigan, or nuclear war. Nor is the standard perfection. Rather, the goal is to identify unconventional risks, the threats, by employing a disciplined, barrier-free What Could Go Wrong process.

The essentials of such a process are:

  • Activation as early as practical, before project initiation or policy definition.
  • A leader who drives big-picture What Could Go Wrong questioning, discourages small-picture problem solving, and doesn’t allow “What if” questioning to be shut down by statements such as “That’s outside our scope”, “There isn’t budget for that”, or “We’re following all the standards”.
  • Engagement of one or more subject-matter experts (“rabble-rousers”) with no formal role in the enterprise and no incentive to tell the organization what it would like to hear.
  • A short duration: 1-2 days is long enough to identify big-picture threats if the right people are present, so this is not a costly or time-intensive process.
  • Concise, clear documentation of the big-picture threats, for action as deemed appropriate by top leaders, not just the project team.

If an organization doesn’t have a process along these lines and thinks such disasters can’t happen to it, it is kidding itself. While such a process wouldn’t protect against all possible threats, it would help identify them sooner, and some could be mitigated. Just as important, risks could be communicated to the public, company leaders, and other stakeholders in an audience-appropriate manner, and considered in planning and decision-making. Such a process, done well, could have prevented the Flint and BP debacles and could have mitigated the Fukushima disaster.

Lawyers often warn us that if we don’t know about a problem, we may not be as liable as if we knew ahead of time. There are two problems with this legally protective approach: 1) we have an ethical, if not legal, obligation to investigate and address threats, especially risks to health and public safety; and 2) how did that approach work out for the principals in the Flint, Fukushima, and BP disasters? Addressing such threats appropriately ought to be one of a leader’s most important jobs. Far from being an assault on private enterprise or interference with government experts, such a process safeguards the interests of these entities because it helps them avoid disasters and grave harm.

How much time and money are organizations willing to spend on public relations and remediation in the wake of disasters, while neglecting big-picture What Could Go Wrong questioning ahead of time? We can do better. We have the talent. We need the will and the process.