An Old Kind of Science

SUBHEAD: Americans who can’t afford health care or heating fuel in the winter still have cell phones and internet access.

By John Michael Greer on 18 December 2013 for Archdruid Report -
(http://thearchdruidreport.blogspot.com/2013/12/an-old-kind-of-science.html)


Image above: A homeless man with a guitar and recycled aluminum cans talks on a cellphone. From (http://talesfromthelou.wordpress.com/2013/03/10/homeless-to-get-free-cellphones-in-california/).

The attempt to conquer nature—in less metaphorical terms, to render the nonhuman world completely transparent to the human intellect and just as completely subject to the human will—was industrial civilization’s defining project. It’s hard to think of any aspect of culture in the modern industrial West that hasn’t been subordinated to the conquest of nature, and the imminent failure of that project thus marks a watershed in our cultural life as well as our history.

I’ve talked here already at some length about the ways that modern religious life was made subservient to the great war against nature, and we’ve explored some of the changes that will likely take place as a religious sensibility that seeks salvation from nature gives way to a different sensibility that sees nature as something to celebrate, not to escape.

A similar analysis could be applied to any other aspect of modern culture you care to name, but there are other things I plan to discuss on this blog, so those topics will have to wait for someone else to tackle them. Still, there’s one more detail that deserves wrapping up before we leave the discussion of the end of progress, and that’s the future of science.

Since 1605, when Sir Francis Bacon’s The Advancement of Learning sketched out the first rough draft of modern scientific practice, the collection of activities we now call science has been deeply entangled with the fantasy of conquering nature.

That phrase “the collection of activities we now call science” is as unavoidable here as it is awkward, because science as we now know it didn’t exist at that time, and the word “science” had a different meaning in Bacon’s era than it does today. Back then, it meant any organized body of knowledge; people in the 17th century could thus describe theology as “the queen of the sciences,” as their ancestors had done for most of a thousand years, without any sense of absurdity.

The word “scientist” didn’t come along until the mid-19th century, long after “science” had something like its modern meaning; much before then, it would have sounded as silly as “learningist” or “knowledgist,” which is roughly what it would have meant, too.

To Francis Bacon, though, the knowledge and learning that counted was the kind that would enable human beings to control nature. His successors in the early scientific revolution, the men who founded the Royal Society and its equivalents in other European countries, shared the same vision.

The Royal Society’s motto, Nullius in Verba (“take nobody’s word for it”), signified its rejection of received textual authority, the stock in trade of the literary and humanistic studies, in favor of the quest for knowledge of, and power over, the nonhuman world.

The crucial breakthrough—the leap to quantification—was a done deal before the Royal Society was founded in 1660; when Galileo thought of defining speed as a measurable quantity rather than a quality, he kickstarted an extraordinary revolution in human thought.
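
To see what that leap amounts to, it helps to put it in modern notation, which is anachronistic for Galileo but captures the move: speed stops being a quality a body has and becomes a ratio of two measurements, which is what makes something like his times-squared law of falling bodies statable at all. A minimal sketch:

```latex
\[
\bar{v} = \frac{\Delta d}{\Delta t}
\qquad \text{and, for a body falling from rest,} \qquad
d \propto t^{2}
\]
```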

Quantitative measurement, experimental testing, and public circulation of the results of research: those were the core innovations that made modern science possible. The dream of conquering nature, though, was what made modern science the focus of so large a fraction of the Western world’s energies and ambitions over the last three hundred years.

The role of the myth wasn’t minor, or accidental; I would argue, in fact, that nothing like modern science would have emerged at all if the craving for mastery over the nonhuman world hadn’t caught fire in the collective imagination of the Western world.

I mentioned last week that Carl Sagan devoted a passage in the book version of Cosmos to wondering why the Greeks and Romans didn’t have a scientific revolution of their own. The reason was actually quite simple. The Greeks and Romans, even when their own age of reason had reached its zenith of intellectual arrogance, never imagined that the rest of the universe could be made subordinate to human beings.

Believers in the traditional religions of the time saw the universe as the property of gods who delighted in punishing human arrogance; believers in the rationalist philosophies that partly supplanted those traditional religions rewrote the same concept in naturalistic terms, and saw the cosmos as the enduring reality to whose laws and processes mortals had to adapt themselves or suffer.

What we now think of as science was, in Greek and Roman times, a branch of philosophy, and it was practiced primarily to evoke feelings of wonder and awe at a cosmos in which human beings had their own proper and far from exalted place.

It took the emergence of a new religious sensibility, one that saw the material universe as a trap from which humanity had to extricate itself, to make the conquest of nature thinkable as a human goal. To the Christians of the Middle Ages, the world, the flesh, and the devil were the three obnoxious realities from which religion promised to save humanity.

To believers in progress in the post-Christian West, the idea that the world was in some sense the enemy of the Christian believer, to be conquered by faith in Christ, easily morphed into the idea that the same world was the enemy of humanity, to be conquered in a very different sense by faith in progress empowered by science and technology.

The overwhelming power that science and technology gave to the civil religion of progress, though, was made possible by the fantastic energy surplus provided by cheap and highly concentrated fossil fuels. That’s the unmentioned reality behind all that pompous drivel about humanity’s dominion over nature: we figured out how to break into planetary reserves of fossil sunlight laid down over half a billion years of geological time, burnt through most of it in three centuries of thoughtless extravagance, and credited the resulting boom to our own supposed greatness.

Lacking that treasure of concentrated energy, which humanity did nothing to create, the dream of conquering nature might never have gotten traction at all; as the modern western world’s age of reason dawned, there were other ideologies and nascent civil religions in the running to replace Christianity, and it was only the immense economic and military payoffs made possible by a fossil-fueled industrial revolution that allowed the civil religion of progress to elbow aside the competition and rise to its present dominance.

As fossil fuel reserves deplete at an ever more rapid pace, and have to be replaced by more costly and less abundant substitutes, the most basic precondition for faith in progress is going away. These days, ongoing development in a handful of fields has to be balanced against stagnation in most others and, more crucially still, against an accelerating curve of economic decline that is making the products of science and technology increasingly inaccessible to those outside the narrowing circle of the well-to-do.

It’s indicative that while the media babbles about the latest strides in space tourism for the very rich, rural counties across the United States are letting their roads revert to gravel because the price of asphalt has soared so high that the funds to pay for paving simply aren’t there any more.

In that contrast, the shape of our future comes into sight. As the torrents of cheap energy that powered industrial society’s heyday slow to a trickle, the arrangements that once put the products of science and technology in ordinary households are coming apart.

That’s not a fast process, or a straightforward one; different technologies are being affected at different rates, so that (for example) plenty of Americans who can’t afford health care or heating fuel in the winter still have cell phones and internet access. Still, as the struggle to maintain fossil fuel production consumes a growing fraction of the industrial world’s resources and capital, more and more of what used to count as a normal lifestyle is becoming less and less accessible to more and more people.

In the process, the collective consensus that once directed prestige and funds to scientific research is slowly trickling away.

That will almost certainly mean the end of institutional science as it presently exists. It need not mean the end of science, and a weighty volume published to much fanfare and even more incomprehension a little more than a decade ago may just point to a way ahead.

I’m not sure how many of my readers were paying attention when archetypal computer geek Stephen Wolfram published his 1,264-page opus A New Kind of Science back in 2002. In the 1980s, Wolfram published a series of papers about the behavior of cellular automata—computer programs that produce visual patterns based on a set of very simple rules.
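
For readers who haven’t met them, a minimal sketch in Python shows just how simple those rules are; this is the standard construction of an elementary cellular automaton, not code from Wolfram’s book, and Rule 30 is one of the automata he discusses at length:

```python
# A minimal elementary cellular automaton. Each cell looks at itself and its
# two neighbors; three binary cells have 2^3 = 8 possible states, so the eight
# bits of the rule number dictate every cell's next state.

def step(cells, rule):
    """Apply one update of an elementary rule (0-255), with wraparound edges."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell and print sixteen rows of Rule 30.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 30)
```

Sixteen rows of output are enough to show the chaotic triangle of Rule 30 emerging from an eight-bit rule and a single live cell.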

Then the papers stopped appearing, but rumors spread through odd corners of the computer science world that he was working on some vast project along the same lines.

The rumors proved to be true; the vast project, the book just named, appeared on bookstore shelves all over the country; reviews covered the entire spectrum from rapturous praise to condemnation, though most of them also gave the distinct impression that their authors really didn’t quite understand what Wolfram was talking about.

Shortly thereafter, the entire affair was elbowed out of the headlines by something else, and Wolfram’s book sank back out of public view—though I understand that it’s still much read in those rarefied academic circles in which cellular automata are objects of high importance.

Wolfram’s book, though, was not aimed at rarefied academic circles. It was trying to communicate a discovery that, so Wolfram believed, had the potential to revolutionize a great many fields of science, philosophy, and culture.

Whether he was right is a complex question—I tend to think he’s on to something of huge importance, for reasons I’ll explain in a bit—but that question matters less than the method he used to get there. With a clarity unfortunately rare in the sciences these days, he spelled out the key to his method early on in his book:

In our everyday experience with computers, the programs that we encounter are normally set up to perform very definite tasks. But the key idea I had nearly twenty years ago—and that eventually led to the whole new kind of science in this book—was to ask what happens if one instead just looks at simple arbitrarily chosen programs, created without any specific task in mind. How do such programs typically behave? (Wolfram 2002, p. 23)

Notice the distinction here. Ordinarily, computer programs are designed to obey some human desire, whether that desire involves editing a document, sending an email, viewing pictures of people with their clothes off, snooping on people who are viewing pictures of people with their clothes off, or what have you.

That’s the heritage of science as a quest for power over nature: like all other machines, computers are there to do what human beings tell them to do, and so computer science tends to focus on finding ways to make computers do more things that human beings want them to do.

That same logic pervades many fields of contemporary science. The central role of experiment in scientific practice tends to foster that, by directing attention away from what whole systems do when they’re left alone, and toward what they do when experimenters tinker with them.

Too often, the result is that scientists end up studying the effects of their own manipulations to the exclusion of anything else. Consider Skinnerian behaviorism, an immensely detailed theory that can successfully predict the behavior of rats in the wholly arbitrary setting of a Skinner box and essentially nothing else!

The alternative is to observe whole systems on their own terms—to study what they do, not in response to a controlled experimental stimulus, but in response to the normal interplay between their internal dynamics and the environment around them. That’s what Wolfram did. He ran cellular automata, not to try to make them do this thing or that, but to understand the internal logic that determines what they do when left to themselves.
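
Here’s a toy version of that workflow, my own sketch rather than anything from Wolfram’s book: pick rule numbers arbitrarily, let each automaton run undisturbed from the same seed, and simply tally what it does.

```python
# Observe rather than steer: run all 256 elementary rules from the same seed
# and record a crude signature of each one's unforced behavior.

def step(cells, rule):
    """One update of an elementary rule (0-255); same encoding as the earlier sketch."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

for rule in range(256):
    cells = [0] * 63
    cells[31] = 1
    rows = set()
    for _ in range(200):
        rows.add(tuple(cells))
        cells = step(cells, rule)
    # A handful of distinct rows means the rule dies out or settles into a short
    # cycle; hundreds of them flag ongoing complexity worth a closer look.
    print(f"rule {rule:3d}: {len(rows):3d} distinct rows in 200 steps")
```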

What he discovered, to summarize well over a thousand pages of text in a brief phrase, is that cellular automata with extremely simple operating rules are capable of generating patterns as complex, as richly textured, and as blended of apparent order and apparent randomness as the world of nature itself. Wolfram explains the relevance of that discovery:

Three centuries ago science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world. My purpose in this book is to initiate another such transformation, and to introduce a new kind of science that is based on the much more general types of rules that can be embodied in simple computer programs. (Wolfram 2002, p. 1)

One crucial point here, to my mind, is the recognition that mathematical equations in science are simply models used to approximate natural processes. There’s been an enormous amount of confusion around that point, going all the way back to the ancient Pythagoreans, whose discoveries of the mathematical structures within musical tones (an octave, for instance, corresponds to a 2:1 ratio of string lengths, a perfect fifth to 3:2), the movement of the planets, and the like led them to postulate that numbers comprised the arche, the enduring reality of which the changing world of our experience is but a transitory reflection.

This confusion between the model and the thing modeled, between the symbol and the symbolized, is pandemic in modern thinking. Consider all the handwaving around the way that light seems to behave like a particle when subjected to one set of experiments, and like a wave when put through a different set. Plenty of people who should know better treat this as a paradox, when it’s nothing of the kind.

Light isn’t a wave or a particle, any more than the elephant investigated by the blind men in the famous story is a wall, a pillar, a rope, or what have you; “particle” and “wave” are models derived from human sensory experience that we apply to fit our minds around some aspects of the way that light behaves, and that’s all they are. They’re useful, in other words, rather than true.

Thus mathematical equations provide one set of models that can be used to fit our minds around some of the ways the universe behaves. Wolfram’s discovery is that another set of models can be derived from very simple rule-based processes of the kind that make cellular automata work.

This additional set of models makes sense of features of the universe that mathematical models don’t handle well—for example, the generation of complexity from very simple initial rules and conditions. The effectiveness of Wolfram’s models doesn’t show that the universe is composed of cellular automata, any more than the effectiveness of mathematical models shows that the Pythagoreans were right and the cosmos is actually made out of numbers.

Rather, cellular automata and mathematical equations relate to nature the way that particles and waves relate to light: two sets of mental models that allow the brains of some far from omniscient social primates to make sense of the behavior of different aspects of a phenomenon complex enough to transcend all models.

It requires an unfashionable degree of intellectual modesty to accept that the map is not the territory, that the scientific model is merely a representation of some aspects of the reality it tries to describe.

It takes even more of the same unpopular quality to back off a bit from trying to understand nature by trying to force it to jump through hoops, in the manner of too much contemporary experimentation, and turn more attention instead to the systematic observation of what whole systems do on their own terms, in their own normal environments, along the lines of Wolfram’s work.

Still, I’d like to suggest that both those steps are crucial to any attempt to keep science going as a living tradition in a future when the attempt to conquer nature will have ended in nature’s unconditional victory.

A huge proportion of the failures of our age, after all, unfold precisely from the inability of most modern thinkers to pay attention to what actually happens when that conflicts with cherished fantasies of human entitlement and importance.

It’s because so much modern economic thought fixates on what people would like to believe about money and the exchange of wealth, rather than paying attention to what happens in the real world that includes these things, that predictions by economists generally amount to bad jokes at society’s expense; it’s because next to nobody thinks through the implications of the laws of thermodynamics, the power laws that apply to fossil fuel deposits, and the energy cost of extracting energy from any source, that so much meretricious twaddle about “limitless new energy resources” gets splashed around so freely by people who ought to know better.
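
To put one number on the thermodynamic side of that argument, here is a back-of-the-envelope sketch; the EROEI figures are illustrative round values, not measurements of any particular fuel. The point is that as energy returned on energy invested declines, the share of gross output society actually keeps falls away, slowly at first and then precipitously.

```python
# Net energy at a given EROEI (energy returned on energy invested): of each
# unit of gross output, 1/EROEI must be plowed back into extraction, so the
# fraction left over for society is 1 - 1/EROEI.

def net_fraction(eroei):
    return 1.0 - 1.0 / eroei

for eroei in (100, 30, 10, 5, 2):
    print(f"EROEI {eroei:3d}:1 -> {net_fraction(eroei):.0%} of gross output is net gain")
```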

For that matter, the ever-popular claim that we’re all going to die by some arbitrary date in the near future, and therefore don’t have to change the way we’re living now, gets what justification it has from a consistent refusal on the part of believers to check their prophecies of imminent doom against relevant scientific findings, on the one hand, or the last three thousand years of failed apocalyptic predictions on the other.

The sort of science that Wolfram has proposed offers one way out of that overfamiliar trap.

Ironically, his “new kind of science” is in one sense a very old kind of science. Long before Sir Francis Bacon set pen to paper and began to sketch out a vision of scientific progress centered on the attempt to subject the entire universe to the human will and intellect, many of the activities we now call science were already being practiced in a range of formal and informal ways. Both of the characteristics I’ve highlighted above—a recognition that scientific models are simply human mental approximations of nature, and a focus on systematic observation of what actually happens—were more often than not central to the way those activities were carried out in earlier ages.

The old Pythagoreans themselves got their mathematical knowledge by the same kind of careful attention to the way numbers behave that Wolfram applied, two and a half millennia later, to simple computer programs. In just the same way, Charles Darwin worked his way to the theory of evolution by patiently studying the way living things vary from generation to generation, and the founders of ecology laid the groundwork for a science of whole systems by systematically observing how living things behave in their own natural settings.

That’s very often how revolutions in scientific fundamentals get started. Whether or not Wolfram’s particular approach proves as revolutionary as he believes—I’m inclined to think it will, though I’m not a specialist in the field—I’ve come to think that a general revision of science, a “Great Instauration” as Sir Francis Bacon called it, will be one of the great tasks of the age that follows ours.
