What Could Go Wrong…And Why It Often Does

“Of course, we consider risks.”


That’s the response from government agencies and companies when the subject of risk preparedness is introduced. Why, then, do we so frequently experience disasters like the Fukushima reactor meltdown, the BP oil spill, and the Flint water contamination, or threats like the Kaspersky software “surprise”?


“Unpredictable events.” “Black Swans.” “A natural disaster.” “Limited human and financial resources.” “The perfect storm.” “Human negligence…malfeasance.” Plenty of explanations and excuses.


Here’s one more: the lack of a What Could Go Wrong process.


I can hear the objections: we have time-tested standards, a quality control process, risk officers, risk-based checklists, a project management program, experienced reviewers.


We can do a better job with threat identification and prevention. Different levels of risk require different approaches. For adhering to standards and design norms, checklists are okay, and most organizations do a good to very good job with this category of risks. Most do a fair to good job with conventional risks: how to prevent spills, what to do if a fire breaks out. Where we do a poor to awful job is with “barrier-free threats”: risks that fall outside an organization’s scope of work or competency, or outside conventional risks, and that have the potential to drastically affect a project, product performance, or a policy’s effectiveness.


A water treatment plant with a scope of work to produce treated water that meets drinking water standards at the plant’s fence line, and a community that doesn’t understand, or neglects, the consequences of unstable water conveyed by lead pipes outside the fence line. Kaspersky software, where a What Could Go Wrong process would have started with this question: Knowing what we do about Russia, what if this company is a “wholly owned subsidiary” of the Russian government? A new state-of-the-art German frigate that wasn’t designed to counter traditional threats; as a January 2018 Wall Street Journal article puts it: “(The) frigate was determined…to have an unexpected design flaw: It doesn’t really work.” And that’s highly regarded German engineering. The Fukushima nuclear reactor that wasn’t built to withstand a tsunami, though it sits on an island in an earthquake-prone region. Instead, we struggle with after-the-fact accusations, “patches”, litigation, and PR initiatives.


Such threats aren’t Black Swans: an asteroid strike, a Los Angeles-magnitude earthquake in Michigan, nuclear war. Nor is the standard perfection. The standard is identifying unconventional risks (threats) by employing a disciplined, barrier-free What Could Go Wrong process.


The essentials of such a process are:

  • Activation as early as practical, before project initiation or policy definition.
  • A leader who drives big-picture What Could Go Wrong questioning, discourages small-picture problem solving, and doesn’t allow “What if” questioning to be shut down by statements such as “That’s outside our scope”, “There isn’t budget for that”, or “We’re following all the standards”.
  • One or more subject-matter experts (“rabble-rousers”) with no formal role in the enterprise and no incentive to tell the organization what it would like to hear.
  • A modest commitment of time and money: one to two days is long enough to identify the big-picture threats if the right people are present.
  • Concise, clear documentation of the big-picture threats, for action as deemed appropriate by top leaders, not just the project team.


An organization that doesn’t have a process along these lines and thinks such disasters can’t happen to it is kidding itself. While such a process wouldn’t protect against every possible threat, threats could be better identified and might be mitigated. Just as important, risks could be communicated to the public, company leaders, and other audiences in an audience-appropriate manner, and considered in planning and decision-making. Such a process, done well, could have prevented the Flint and BP debacles, and could have mitigated the Fukushima disaster.


Lawyers often warn us that if we don’t know about a problem, we may not be as liable as if we had known ahead of time. The problems with this legally protective approach: 1) we have an ethical, if not legal, obligation to investigate and address threats, especially risks to health and public safety; and 2) how did this legally protective approach work out for the principals in the Flint, Fukushima, and BP disasters? Addressing such threats appropriately ought to be one of a leader’s most important jobs. Far from being an assault on private enterprise or interference with government experts, such a process safeguards the interests of these entities because it helps them avoid disasters and grave harm.


How much time and money are organizations willing to spend on public relations and remediation in the wake of disasters, while neglecting big-picture What Could Go Wrong questioning ahead of time? We can do better. We have the talent. We need the will and the process.

Why We Can’t Build Infrastructure Like We Used To [Mackinac Center]

Regulatory burdens just as much to blame as political gridlock


WSJ: Pollution Used to Mean More Than Just CO2

I had the opportunity to comment on a recent Wall Street Journal article by Bjorn Lomborg about climate change, CO2, President Trump, China, and India.

A regrettable outcome of the media-hogging climate change debate is that the measure of pollution by nations has been reduced to carbon dioxide emissions, a rather benign compound apart from its relationship to climate change. In “The Charade of the Paris Treaty” (Review, June 17-18, 2017), Bjorn Lomborg succumbs to this myopic view when he states, “He (President Trump) failed to acknowledge that global warming is real and wrongly claimed that China and India are ‘the world’s leading polluters’.” Mr. Trump is actually onto something if we broaden the definition of pollution, as we once did, to include polluting chemicals that contaminate water, air, and land, including habitats. Nations like China and India are among the most egregious polluters when this more liberal, and more comprehensive, definition of pollution is applied.

Science and Speculation

I recently read several articles about a visiting baseball player who was subjected to racial hazing during a game at Fenway Park. The sense of these articles is that this attitude reflects on the city of Boston, and on America at large. This is an all-too-common tendency today: extrapolating a statement, an incident, or even data to have far broader applicability than the evidence warrants.


Science is much in the news, with accusations of “science denial” and climate change skepticism, Creationists disputing evolutionary evidence, scientist-celebrities making bold pronouncements, and front-page scientific studies that were once lauded and have now been refuted (often on the back page).


Though the laws of science—gravitation, thermodynamics, the conservation of mass and energy—are fixed, for all practical purposes anyway, the interaction of influencing factors and forces in complex systems like the Earth’s climate, Lake Michigan, even local weather on a given day, can produce a variety of outcomes, some predictable, some surprising. Surprising not because the laws of science have been violated, but because the system, the combination of dozens or hundreds of factors and forces, couldn’t be adequately modeled, or the input to the model (data/design) was flawed or incomplete.


I’ve seen my share of bad science and bad data (sadly, I’ve been guilty myself on occasion). I’ve learned that while we need to rely on data, honest skepticism is an important aspect of the scientific method. On many occasions, scientists, the experts, have reached a consensus on something that was subsequently proven false. As Matt Ridley wrote in a 2013 Wall Street Journal article, “Science is about evidence, not consensus.” I’m with Mr. Ridley. I don’t care about consensus, no matter how passionate or morally indignant. I want to see the data and the evidence, and how they’re linked to conclusions.


Drawing broad conclusions from evidence or evidence-based models has inherent risks. This doesn’t mean we can’t (and don’t) rely on evidence and models, only that we should understand the limitations and risks of doing so. Some years back, The Wall Street Journal published my rebuttal to its news article titled “Study Finds Global Warming Is Killing Frogs”: “When science records what it observes, when it measures phenomena, and when it faithfully and accurately models that data, its findings are valid, useful and reliable. But when scientists…offer speculation…credibility and reliability are diminished, sometimes drastically. Thus, the observation that the frog population worldwide is declining…in combination with models that purport to demonstrate global warming, is not (yet) sufficient to assert the title of your article. This conclusion is speculative, as it is based on the assumption that warmer temperatures at higher elevations in Costa Rica are responsible for…the fungus that is infecting the frogs.”


If extrapolation of data and evidence is a problem in the hard sciences, how much more so in the social sciences? What’s needed is a clear understanding of (1) how the evidence/data was obtained; (2) the extent to which this evidence/data applies to the system being studied, along with identification of any gaps or missing pieces; and (3) the extent to which the model faithfully describes the system being studied. Can speculative conclusions, such as “Study Finds Global Warming Is Killing Frogs”, be justified by the data and evidence? Stephen Hawking recently revised his “authoritative” conclusion that humankind has 1,000 years to escape the planet, cutting the figure to 100 years. Hawking is a recognized expert on theoretical physics, but the fate of the planet is far too complex for 1,000 years, 100 years, or any other number to be credible. Just because an authoritative individual or institution says something doesn’t make it so.


As to that fan, or handful of fans, at Fenway Park, what they said is on them, and based on the evidence, that’s what science would say too.