Science and speculation

I recently read several articles about a visiting baseball player who was subjected to racial hazing in a game at Fenway Park. The sense of these articles is that this incident reflects on the city of Boston, and on America at large. This is an all-too-common tendency today: to extrapolate a statement, an incident, or even data, to have far broader applicability than the evidence warrants.


Science is much in the news, with accusations of “science denial,” climate change skepticism, Creationists disputing evolutionary evidence, scientist-celebrities making bold pronouncements, and front-page scientific studies that were once lauded and have since been refuted (often on the back page).


Though the laws of science—gravitation, thermodynamics, the conservation of mass and energy—are fixed, for all practical purposes anyway, the interaction of influencing factors and forces in complex systems like the Earth’s climate, Lake Michigan, even local weather on a given day, can produce a variety of outcomes, some predictable, some surprising. Surprising not because the laws of science have been violated, but because the system, the combination of dozens or hundreds of factors and forces, couldn’t be adequately modeled, or the input to the model (data/design) was flawed or incomplete.


I’ve seen my share of bad science and bad data (sadly, guilty myself on occasion). I’ve learned that while we need to rely on data, honest skepticism is an important aspect of the scientific method. On many occasions, scientists—experts—have reached a consensus on something that was subsequently proven to be false. As Matt Ridley wrote in a 2013 Wall Street Journal article, “Science is about evidence, not consensus.” I’m with Mr. Ridley. I don’t care about consensus, no matter how passionate or morally indignant. I want to see the data and the evidence, and how it’s linked to conclusions.


Drawing broad conclusions from evidence or evidence-based models has inherent risks. This doesn’t mean we can’t (and don’t) rely on evidence and models, only that we should understand the limitations and risks of doing so. Some years back, The Wall Street Journal published my rebuttal to their news article entitled “Study Finds Global Warming Is Killing Frogs”: “When science records what it observes, when it measures phenomena, and when it faithfully and accurately models that data, its findings are valid, useful and reliable. But when scientists…offer speculation…credibility and reliability are diminished, sometimes drastically. Thus, the observation that the frog population worldwide is declining…in combination with models that purport to demonstrate global warming, is not (yet) sufficient to assert the title of your article. This conclusion is speculative, as it is based on the assumption that warmer temperatures at higher elevations in Costa Rica are responsible for…the fungus that is infecting the frogs.”


If extrapolation of data/evidence is a problem in the hard sciences, how much more so in the social sciences? What’s needed is a clear understanding of (1) how the evidence/data was obtained; (2) the extent to which this evidence/data applies to the system being studied, along with identification of any gaps or missing pieces; and (3) the extent to which the model faithfully describes the system being studied. Can speculative conclusions, such as “Study Finds Global Warming Is Killing Frogs,” be justified by the data and evidence? Stephen Hawking recently revised his “authoritative” conclusion that humankind has 1,000 years to escape the planet, cutting the figure to 100 years. Hawking is a recognized expert on theoretical physics, but the fate of the planet is far too complex for 1,000 years, 100 years, or any number to be credible. Just because an authoritative individual or institution says something doesn’t make it so.


As to that fan, or handful of fans, at Fenway Park, what they said is on them, and based on the evidence, that’s what science would say too.


Ideology and Storytelling

Ideology is poisoning storytelling and rattling readers: “Writers are often seduced by cultural genies to make characters into sock puppets and plot lines into bullhorns that parrot ‘smart’ thinking or ridicule ‘regressive’ perspectives and values.” – Thomas M. Doran

Source: Ideology Is poisoning storytelling and rattling readers | Catholic World Report – Global Church news and views

A Roadmap for Disaster Prevention

A company with a sterling reputation for quality and customer service experiences the killing of a child by a wild animal at one of its premier resorts.

Changes in water supply and treatment result in the release of lead from old pipes, producing a public health debacle.

The removal of a brutal dictator from a Middle Eastern country unleashes deadly sectarian conflict and emboldens opportunistic neighbors.

A new jail goes tens of millions of dollars over budget, resulting in project suspension, public outrage, and lawsuits.

In all these disasters, professional, competent, experienced, and honorable people were involved in decision-making and implementation. Yes, they were. So how do such things happen?

Murphy’s Law? Karma? The law of averages? Nefarious characters?

Some ascribe such disasters to the theory of black swan events, a black swan being a metaphor for an unpredictable surprise that exerts major effects. Others accuse those involved of negligence or malfeasance. But none of these events meets the definition of a black swan; they could have been predicted. And while negligence or malfeasance may have exacerbated the problems, most of the people involved weren’t derelict in their duties and didn’t commit crimes.

Another explanation more accurately describes these disasters: Big Picture Miasma, caused by not taking a disciplined and unhurried look at potential perils, along with the human inclination to avoid questions that can’t be definitively answered.

How can Big Picture Miasma be prevented? Is it possible, or is mitigating or softening impacts the best we can do?

A Big Picture process to prevent disasters could be deployed on projects and operations, military and otherwise, that are likely to impact health, safety, and well-being, and on larger projects and operations of any kind that have the potential to produce turmoil (the jail project) if they go upside-down. The process steps are:

  • Early exploration of Big Picture questions, such as:
    • What outcomes are most essential?
    • What outcomes must be prevented?
    • What could go wrong?
  • Making sure this exploration doesn’t get lost in the details or become mired in problem solving, a powerful temptation that must be resisted because it distracts from identifying Big Picture risks, the sole purpose of this exercise.
  • Involvement of subject matter experts who have no subsequent involvement in the project or operation, are not members of any of the involved organizations, and whose only role is to prompt, probe, and identify risks, especially disaster-level risks, with nothing off the table.
  • Though the primary expertise of these experts should be health, safety, and well-being, experts with other specialties might be recruited based on the nature of the project or operation.
  • Succinct, layman-friendly (no legalese) documentation of disaster-level risks, with communication to the highest levels in the organization.

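For illustration only, the steps above could be captured in a minimal risk-register sketch. All names and fields here are hypothetical, chosen to mirror the process described, not any established project-management tool:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str              # succinct, plain language, no legalese
    disaster_level: bool = False  # flags risks that must reach top management
    raised_by: str = ""           # outside subject matter expert, per the process

@dataclass
class BigPictureReview:
    """Early exploration record: Big Picture questions and risks only, no problem solving."""
    essential_outcomes: list = field(default_factory=list)   # what outcomes are most essential?
    outcomes_to_prevent: list = field(default_factory=list)  # what outcomes must be prevented?
    risks: list = field(default_factory=list)                # what could go wrong?

    def add_risk(self, description, disaster_level=False, raised_by=""):
        self.risks.append(Risk(description, disaster_level, raised_by))

    def escalation_report(self):
        """Succinct summary of disaster-level risks for the highest levels of the organization."""
        return "\n".join(r.description for r in self.risks if r.disaster_level)
```

The point of the sketch is the separation it enforces: the review records outcomes and risks but contains no field for solutions, and only disaster-level entries flow upward in the escalation report.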
This process is rarely practiced, even in organizations with sophisticated project management and quality-control practices. Why? Because these organizations and their procedures are too detail-oriented. I’m confident that if this process had been used on the Flint water project, the disaster could have been prevented. All the cited disasters would likely have been identified with such a process and, once identified, might have been prevented.

Big Picture Miasma can destroy organizations and lives. So why isn’t such a simple process deployed more often? Though problems are experienced on every project and in every operation, big problems are less common, and full-scale disasters rarer still. Disasters are sometimes avoided through traditional procedures, or by good fortune. But consider the cost when they aren’t: the Flint, Michigan water problem could have been prevented with this simple process; instead, thousands of people were impacted and some of those involved were taken down. The situation involving the Middle Eastern country is even worse. And though only one life was lost at the resort, it was one life too many, and preventable. As for the jail, though no lives were lost, common project management deficiencies were to blame and could have been headed off.

We can do better. We should do better. Most disasters aren’t black swans, or caused by Murphy’s Law or bad people. A Big Picture process can prevent many of these terrible disasters.

Thomas M. Doran, P.E., FESD, has been practicing and teaching project management, and developing PM processes, for four decades.