Planning for the Unimaginable

Excerpts from the book

Confronting Complexity

X-Events, Resilience, and Human Progress

by

John L. Casti

Roger D. Jones

Michael J. Pennock



One of the threads running through this book is the notion that there can be no human progress without X-events. Of course, here we are speaking of the kind of “progress” that’s revolutionary in character, not evolutionary. This is progress in which visible, meaningful change happens rapidly enough that we can often see the change taking place before our very eyes. The argument we’ve given for the claim that X-events are a necessary condition for this type of progress is that revolutionary change requires that existing social, economic, and/or political structures that have outlived their usefulness be swept away. But if there is anything the power structures in modern society—mostly politicians, banks, and megacorporations—want, it is to maintain the status quo. So the only thing that can overcome that sort of power-induced stasis is an “act of god,” i.e., an X-event, something that the existing power structure is powerless to prevent.

When we present this line of argument in public, reactions often take the form: do you mean we should deliberately go out and destroy the social structures that most of us depend upon for our daily lives, in order to open up niches for these new structures that you say constitute “progress”? Well, not quite. We’re not advocating total anarchy here. Throwing out the baby with the bathwater doesn’t really benefit anyone, especially the baby. What we do advocate, though, is that humans be a bit more imaginative and devote a lot more thought to how to create controlled X-events that will clean out the Augean stables without destroying the horses in the process. In other words, we should deliberately promote experiments that will not kill the experimenter. In fact, we already engage in such activities, albeit without labeling them “X-events” (yet!). Here are a couple of examples.

 

If left unattended, forests will continue to grow until the trees fill up all available space for expansion. This means that there are way too many trees clustered too close together, so that when the inevitable lightning strike or insect infestation occurs, it spreads literally like wildfire through the forest and destroys everything. The forest is effectively leveled and the entire growth process must start again from its original (and literal) ground state. This is especially bad since the growth of a forest to maturity takes place on a timescale measured in decades, if not centuries.

To prevent this type of calamity, forestry managers introduced the idea of a controlled burn to clear out dead trees, overcrowding, and other conditions that interfere with the forest’s ability to resist attacks from insects, lightning, and other such existential threats. Logic and intuition argue that thinning out the forest in this way will help protect it from a catastrophic collapse. The US Forest Service, which manages 200 million acres of public land, believes it and contracts with logging companies to take out large and small trees using prescribed types of small fires.
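To see why density is the decisive variable here, it helps to play with the classic forest-fire percolation model from complexity science. The sketch below is a toy illustration, not anything the Forest Service actually runs: trees occupy a grid at a given density, a single lightning strike ignites one tree, and fire spreads between adjacent trees. The grid size and the two densities are invented for illustration.

```python
import random

def simulate_burn(density, size=50, seed=0):
    """Toy percolation model of a forest fire: plant trees on a
    size x size grid with the given density, ignite one tree, and
    let fire spread to orthogonally adjacent trees. Returns the
    fraction of all trees that end up burned."""
    rng = random.Random(seed)
    trees = {(r, c) for r in range(size) for c in range(size)
             if rng.random() < density}
    if not trees:
        return 0.0
    start = rng.choice(sorted(trees))  # the "inevitable lightning strike"
    burning, burned = [start], {start}
    while burning:
        r, c = burning.pop()
        for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nbr in trees and nbr not in burned:
                burned.add(nbr)
                burning.append(nbr)
    return len(burned) / len(trees)

# Dense, unmanaged forest vs. a thinned one (densities are illustrative).
for density in (0.75, 0.45):
    runs = [simulate_burn(density, seed=s) for s in range(20)]
    print(f"density {density:.2f}: mean fraction burned "
          f"{sum(runs) / len(runs):.2f}")
```

Run as written, the dense grid typically loses most of its trees to a single strike, while the thinned grid loses only a small patch. The transition between the two regimes is sharp, which is the quantitative intuition behind thinning.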

The Forest Service approach is based upon tree ring studies, and amounts to an attempt to restore the forests of the western United States to their state before the twentieth century. But recently, ecologists and environmentalists have begun to argue that this procedure of controlled X-events rests upon shaky science and is actually environmentally harmful. They argue that tree ring records do not tell the whole story, and that forests have historically suffered much more severe fires than the rings would suggest.

These counter-arguments to controlled burns imply that the ecology of forests depends on fires of many different degrees of scale and intensity—including what we would see today as catastrophic fires. In short, the argument is that large fires can actually stimulate biodiversity rather than destroy it. Of course, this is an argument of degree, not kind. Everyone agrees that burns are helpful. What seems to divide the Forest Service from the academic ecologists is the type of burns they advocate. The ecologists say that the matter should be left to nature, which will provide the scope and scale of burns needed to promote diversity, rather than to managed burns created on the basis of incomplete science and human intuition. And it does not appear that either side would advocate burns that completely destroy the forest! So the question is, just how controlled is “controlled”? There is simply no uniform answer, as the forest burn issue illustrates.

Before leaving this point, let us note that the very same principle is at work in many other areas besides forests. When we go out to our backyard garden and start pulling weeds, we are thinning the vegetation in a way that we believe will give our tomato and cucumber plants a better chance to produce something for the dinner table than if we’d left the garden alone. It’s clear that you don’t have to eliminate every single weed to end up with a nice homegrown salad. But it’s equally clear that you can’t let the weeds take over the garden either. A similar story can be told about raising cattle or sheep. We cull the herds so as to remove animals that are injured, genetically weak, or otherwise using resources that could be put to better use by healthier animals in the herd. And in another direction entirely, it has even been proposed to pump water into earthquake fault zones to deliberately trigger small, tension-relieving quakes, reducing the stress on a fault that could otherwise rupture in the Big One.

 

As a last remark, let’s talk for a moment about humans instead of weeds or stragglers in a herd. Does introducing humans change the picture? Probably the first thought that comes to mind is that any public talk about “weeding out” humans would be akin to political suicide, evoking dark thoughts about genocides, social Darwinism, and the like. While that may often be true, here is an example of how things can go in just the opposite direction.

About 20 years ago, the state of Oregon got fed up with the foot-dragging at the federal level in putting together a decent healthcare program of the sort everyone takes for granted in much of the rest of the industrialized world. So they decided to do it themselves. Here is a rather potted version of how it worked out. The details can be found on the Wikipedia page under “Oregon Health Plan.”

Basically, the state had a certain level of funds available to support the health plan. So they assembled a long list of medical conditions, procedures, drugs, and the like that their citizens might require for maintaining their health. The state then estimated how much it would cost to service each of these medical conditions and ranked the items in order of priority. They then went down the list adding up the expenses until they ran out of available funds. At that point, a line was drawn and the state declared that it could not service any condition that fell below the line. This meant that many people with rare and/or extraordinarily expensive medical conditions would not receive benefits from the state health plan. As just noted, one might have expected this procedure to generate a huge outcry from the state’s residents in general, and those with conditions below the line in particular, with declarations of social injustice and bias heading the list of grievances. But, in fact, even though there were grumblings here and there, no such massive outcry emerged. Why not?
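In algorithmic terms, the procedure just described is a simple greedy budget cutoff. Here is a minimal sketch in Python; the condition names, costs, and budget figure are all invented for illustration and bear no relation to Oregon’s actual list.

```python
# Hypothetical conditions with estimated statewide costs (in millions),
# already ranked by the state's priority ordering. All numbers invented.
ranked_conditions = [
    ("preventive care", 40),
    ("maternity care", 30),
    ("diabetes management", 25),
    ("common surgeries", 20),
    ("rare disorder treatments", 35),
]

def draw_the_line(ranked, budget):
    """Walk down the ranked list, accumulating estimated costs until
    the available funds run out. Everything past that point falls
    below the line and is not covered by the plan."""
    covered, spent = [], 0
    for condition, cost in ranked:
        if spent + cost > budget:
            break
        covered.append(condition)
        spent += cost
    below_line = [name for name, _ in ranked[len(covered):]]
    return covered, below_line

covered, below = draw_the_line(ranked_conditions, budget=100)
print("covered:", covered)
print("below the line:", below)
```

With this toy budget the first three items are covered and the last two fall below the line: a transparent rule that, as we discuss next, produced winners and losers without producing an uproar.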

While it’s difficult to give a complete explanation of why the residents didn’t call for the impeachment of the governor and other legislators and initiate a flurry of lawsuits against the state for various types of bias, here are two of the main reasons: (1) the process of deciding what conditions did and did not fall below the line was a public process, and (2) everyone had an opportunity to voice their opinion about the process. Consequently, the process was perceived as being fair even though it generated winners and losers in this healthcare lottery. In essence, the residents of Oregon realized that there was simply not enough money to go around to protect everybody against everything. Some choices had to be made, and the process for making them was seen as fair.

 

What we have been describing as a controlled burn is nothing more than a reframing of the general issue we’ve spoken about before: the idea of a complexity gap that cannot be allowed to grow to the point where the stresses in the system must be relieved by a dramatic and devastating X-event. This message is important enough for us to revisit it here within the context of system resilience. Let’s start by re-examining the Oregon Health Plan.

From a systemic standpoint, we can view the Oregon Health Plan as two systems in interaction. One system is the Residents of the state, with their accompanying health needs. If we take the healthcare needs of an individual resident and add these needs together across every single resident as a measure of the complexity of the Residents, that system has a huge complexity level, somewhere in the many millions. The other system is the State, with its resources available to attend to these healthcare requirements. Suppose we measure the State’s complexity by the number of healthcare needs that it can actually finance. As already noted, this complexity is considerably less than the needs of the Residents. So there is a gap: the difference between the needs of the Residents and the State’s ability to service those needs.

As long as that gap remains relatively small, there is no problem. Most of the needs of most of the people can be met, and the dynamic balance between the two systems is sustainable. But as is almost always the case in such matters, the Residents begin asking for more of their needs to be serviced, and the State cannot raise money fast enough to keep up with these requests. So the complexity of the Residents grows at a faster pace than that of the State. Eventually, the gap widens to the point where either the State has to raise more healthcare revenue or the Residents have to reduce their demands. The usual situation is that neither of these remedies can be applied, with the inevitable result that the cohesion between the two systems snaps and there is a collapse. And while the Oregon Plan has not yet collapsed, it has undergone major changes over the past decade or so in order to close this gap. At the moment, the gap seems to have been reduced to a level that allows the two systems, Residents and State, to co-exist.
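The dynamic just described, in which one complexity level outgrows the other until cohesion snaps, can be captured in a few lines of toy code. The starting values, growth rates, and “snap” threshold below are invented purely to show the shape of the process, not to model Oregon’s actual finances.

```python
def complexity_gap(demand, capacity, demand_growth=0.05,
                   capacity_growth=0.02, snap_gap=0.5, years=30):
    """Toy model of the two interacting systems: the Residents'
    demand grows faster than the State's capacity. Returns the first
    year the relative gap (fraction of unmet need) exceeds the snap
    threshold, or None if it never does within the horizon.
    All rates and thresholds are invented for illustration."""
    gap = (demand - capacity) / demand
    for year in range(1, years + 1):
        demand *= 1 + demand_growth      # Residents ask for more
        capacity *= 1 + capacity_growth  # State raises money more slowly
        gap = (demand - capacity) / demand
        if gap > snap_gap:
            return year, gap
    return None, gap

year, gap = complexity_gap(demand=1.0, capacity=0.9)
if year is not None:
    print(f"cohesion snaps in year {year} ({gap:.0%} of need unmet)")
else:
    print(f"gap stays manageable ({gap:.0%} of need unmet)")
```

With these made-up rates the gap crosses the threshold after roughly two decades; raising capacity growth or trimming demand growth pushes the snap point out, which is the kind of adjustment the Oregon Plan’s later revisions amount to.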

Many other examples from earlier in the book can be seen as illustrations of this same principle, so we won’t repeat them here. The point is that a resilient system would be able to anticipate that these dangerous gaps will arise, and would have developed procedures for monitoring the change in complexity levels so as to be able to see when the systems are entering the “yellow zone” where a collapse starts to become imminent. Finally, the resilient system would already have a plan in place to reduce the gap in order to stave off, or at least reduce, the impact of the resulting X-event.
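To make the “yellow zone” idea concrete, here is what the monitoring piece of such a plan might look like when reduced to its bare bones. The thresholds and sample gap readings are placeholders; in practice they would have to be calibrated to the system being monitored.

```python
def gap_zone(unmet_fraction, yellow=0.20, red=0.40):
    """Classify the complexity gap (fraction of unmet demand) into
    traffic-light zones. Thresholds are placeholders, not calibrated
    values."""
    if unmet_fraction < yellow:
        return "green"   # gap manageable, keep monitoring
    if unmet_fraction < red:
        return "yellow"  # collapse becoming imminent, act now
    return "red"         # X-event likely without intervention

# Illustrative gap readings taken over time.
for year, unmet in [(5, 0.12), (15, 0.27), (25, 0.45)]:
    print(f"year {year}: {unmet:.0%} unmet -> {gap_zone(unmet)}")
```

The value of such a scheme lies less in the code than in the commitment it forces: deciding in advance what counts as “yellow,” and what the pre-positioned response will be.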

 

The final question that usually arises in discussions of resilience is a simple one to ask, but fiendishly difficult to answer: how do we measure the resilience of a system? As with “complexity,” another very useful everyday word used differently in the scientific world than in ordinary life, what constitutes resilience and how it is measured probably has as many answers as there are researchers and scholars who are studying the problem. Earlier, we gave one partial answer when we argued for a definition of resilience in terms of the Four As. So let’s return to that definition and look at it now from the perspective of attaching a numerical measure to it in a given situation.
