IV. A Patchwork of Metaphor

Patchworks of scientific metaphor, asymptotically approaching reality by allowing us to comprehend abstract relations through comparison with concrete ones.

The language used so far may have seemed too removed from the number-crunching chores of experimental science to be anything other than a whimsical indulgence. We have waxed wishy-washy about measuring devices that “perceive” and “respond”, about an environment that “sends signals”, and about “filters” and “categories” pursuing some virtual, transcendental existence everywhere we look. Through the backdoor, we have smuggled in a whole ensemble of precisely the kind of dim anthropomorphisms and dodgy metaphors that we are normally advised to steer clear of. But to become aware of relations more abstract than our basic-level categories – in other words, to extend our umwelts – comparison with concrete relations already in our umwelt may be the only method available to us. And what is metaphor but the act of understanding one thing in terms of another?

Concrete analogs rarely capture more than a few aspects of the abstract pattern that we wish to comprehend, but they constitute bundles of logic that may be mentally combined, twisted and tinkered with until they grasp the abstract pattern as a whole. A cluster of sprawling analogs is like clanging machinery that careful thought can hammer into harmony. The patchwork that results possesses none of the effortless flow of the fabric of reality itself, which seamlessly sweeps past our heavy-handed attempts at describing it. Still, which analogy we choose is not arbitrary – “signal” and “filter” are, for example, more appropriate than “guitar” and “refrigerator” – and this non-arbitrariness makes analogy a flawed but irreplaceable wellspring of new understandings. The choice of analogy influences which hypotheses we choose to test, and how we interpret experimental results. Let us therefore hammer the analogies introduced so far into a coherent whole, and explore how more rigorous quantitative methods – like information theory and probability theory – can be derived from them, lending their hazy hunches about the Universe some number-crunching credibility.

To begin with, there is that low-key piece of metaphor suggesting that the dynamic of the Universe is organized into bounded entities called “systems”. You won’t find a scientist today who does not make regular use of this concept in their work, but you will be hard-pressed to find one eager to elaborate on why reality seems populated by systems when nothing says it must be. While systems could quite unproblematically be defined as “changing sources of observations” – covering things as diverse as atoms, the Earth’s atmosphere, the bacterial population in a petri dish, and human organizations – these sources would have to maintain some degree of self-similarity over time in order to be continuously observed, and the definition does nothing to explain the origin of this self-similarity.

In some cases, this self-similarity could be externally imposed, as when a chemical system is insulated by the walls of a beaker, but other systems, like a rainstorm or human episodic memory, can be repeatedly measured without being spatially well-defined. This carries the intriguing implication that some dynamics are somehow capable of sustaining themselves for long periods of time, while other wisps of activity soon dissolve for failure to achieve this degree of individuality. There is something in the architecture of their interactions, in how their components combine and constrain each other, that affords these patterns of change stability even as the substances they are made up of are constantly replaced. And whatever this something is, it is what provides a way for us to partition nature into features and aspects, from the solid, massy entities we take as unambiguously real, to the tenuous statistical patterns that keep scientists awake at night.

The dynamic of the Universe is nearly decomposable into systems.

Part of this something is touched upon by the concept of “feedback”, borrowed from self-regulating artifacts like sound amplifiers and thermostats. Simple feedback guides planetary orbits and mechanical systems by continually plugging the results of Newton’s equations back into the same equations, forming a never-ending loop. These systems tend to be orderly to the point of boringness. Myriads of such simple components can, however, if supplied with energy, interact to realize positive feedback, as seen in turbulent flow, in which large eddies break up into a hierarchy of ever-smaller eddies, until energy can dissipate through molecular diffusion. Such systems are thermodynamically open, “chaotic”, and not particularly long-lived.
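To make the loop image concrete, here is a minimal sketch of an orbit computed by repeatedly feeding Newton’s equations back into themselves. All of the numbers are illustrative assumptions (units chosen so that GM = 1, an arbitrary step size), not a definitive implementation:

```python
import math

# Feeding Newton's equations back into themselves: the current position
# yields an acceleration, which updates the velocity, which updates the
# position that is fed back in again -- a never-ending loop.
GM = 1.0     # gravitational parameter (illustrative units)
dt = 0.001   # time step (arbitrary)

x, y = 1.0, 0.0      # position
vx, vy = 0.0, 1.0    # velocity; for GM = 1 and r = 1 this is a circular orbit

for _ in range(100_000):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3  # Newton's law of gravitation
    vx, vy = vx + ax * dt, vy + ay * dt      # the result is plugged back in...
    x, y = x + vx * dt, y + vy * dt          # ...and the loop closes

print(f"radius after the loop: {math.hypot(x, y):.3f}")  # stays near 1.0
```

Orderly to the point of boringness, as promised: the planet circles around and around, and the loop never produces anything new.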

A much more robust class of systems is that of “complex adaptive systems” – which includes ant colonies, the immune system, and human economies – whose members are dynamically organized so as to learn. Negative feedback, in which the system architecture is continually updated via trial and error, allows them to accommodate past disturbances, form an internal schema of their environments, and use this schema to take pre-emptive action against predicted threats, thereby maintaining their non-equilibrium state. In the context of biology, this mechanism is familiar as “homeostasis”. Homeostasis is what distinguishes a living organism from a dead one – a corpse is at thermodynamic equilibrium with its environment, while a living body is poised at a state particularly suited for avoiding becoming a corpse and for remaining a system.
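A minimal sketch of the negative-feedback mechanism, with a hypothetical setpoint, gain, and disturbance model: a state variable is knocked about by its environment and nudged back toward the setpoint, so that it is stable on average while never at rest:

```python
import random

setpoint = 37.0   # the state the system "wants" to maintain (illustrative)
state = 37.0
gain = 0.5        # strength of the corrective response (illustrative)

for step in range(50):
    state += random.uniform(-1.0, 1.0)  # the environment perturbs the system
    error = setpoint - state            # the perturbation is detected...
    state += gain * error               # ...and a corrective response offsets it
    print(f"step {step:2d}: state = {state:.2f}")

# The state hovers near the setpoint on average -- stable, yet in constant
# non-equilibrium, since maintaining it requires continual corrective work.
```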

Homeostasis may be compared to a tray of water that, in order to be stable on average, uses cues to predict future disturbances, so as to take pre-emptive actions that offset them. The system is therefore in constant non-equilibrium.

The deceptively straightforward notion of a “system” therefore hides a lot of ambiguity and unanswered questions. We have already discussed qualitative accounts of the emergence of systems, such as the idea that spontaneously formed boundaries are inherently likely to survive, and therefore also to spontaneously combine into “near-decomposable” hierarchies. Theorists working in the fields of computer science, complexity science, and network science are only beginning to develop a vocabulary with which to characterize these issues.
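As a sketch of what “near-decomposable” means, consider a hypothetical coupling matrix in which components interact strongly within subsystems and only weakly between them (the block sizes and coupling strengths below are illustrative assumptions):

```python
import numpy as np

strong, weak = 1.0, 0.01   # intra- vs. inter-subsystem coupling (illustrative)
A = np.full((3, 3), strong)   # dense interactions within a subsystem
B = np.full((3, 3), weak)     # weak interactions between subsystems
coupling = np.block([[A, B],
                     [B, A]])
print(coupling)

# Over short timescales each strong block behaves as a unit of its own; the
# weak off-diagonal couplings matter only in the long run -- which is what
# lets us carve the whole into parts at all.
```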

To maintain self-similarity, an adaptive system would have to be bounded in the sense that it has evolved an interface that causes a detected perturbation to induce a dynamical modification – a learned response. It is hard not to imagine this interface as some spectral kind of membrane, with a slippery texture that hovers between a liquid and solid state so as to cautiously accommodate external pressure without succumbing to it. The interface can be said to have a filter – a set of categories of perturbations that it has adapted to, is responsive to, and without which its current structure would make no sense. Its filter is therefore not like a simple fishing net with only one type of hole that restricts what kind of fish it may catch. Rather, it classifies disturbances into input categories in order to respond with what its past has taught it to be a relevant action.

Categories are meaningful, for distinguishing between them has consequences for a system's survival.

Reconsider the flower-watering device from the last section. Even though this system is an artifact, it can be said to have evolved via adaptation and natural selection, for an analogous iteration of trial and error went into its conception and construction. Its structure is a consequence of environmental constraints – including the problem-solving goal of its inventor, cost considerations, and basic physical restrictions – as if it were molded by an invisible hand. In the survival of the fittest, if the device failed to satisfy its intended function – to control which flower to water as a function of mass – it would be dismantled, and some other clever design would take over its niche. In this way, natural selection has defined a filter, here composed of the six different weight ranges – the set of input categories that the device has prepared responses for – which support its continued existence and consequently make the categories meaningful. Note that the filter is a small subset of all possible influences, and only includes other systems that were significant in its formative context (the device does not behave meaningfully in response to, say, a laser beam, because there was no selective pressure for it to). This makes the filter analogous to the domain – the set of valid inputs – of a mathematical function f(x).
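A sketch of the filter as the domain of a function, with hypothetical weight boundaries: six input categories each map to a prepared response, the further decimals of a mass are invisible beyond its category (anticipating the point below), and inputs outside the domain simply have no meaning for the device:

```python
BOUNDARIES = [100, 200, 300, 400, 500]  # grams; five cuts define six categories

def categorize(mass_in_grams: float) -> int:
    """Collapse a continuous mass onto one of six discrete input categories."""
    return sum(mass_in_grams > b for b in BOUNDARIES)

def respond(mass_in_grams: float) -> str:
    """The response prepared for each category: which flower to water."""
    return f"water flower #{categorize(mass_in_grams) + 1}"

print(respond(310.0))     # water flower #4
print(respond(310.0001))  # same category, identical response

# A laser beam is not a mass at all: it lies outside the domain, and the
# device has no meaningful behavior prepared for it.
```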

We could think of the system as projecting a space of possibilities upon reality – a set of categories of scenarios that could happen. While a human observer, also a system, may describe the input stone along an unlimited number of sensory dimensions, like color, texture, and shape, as far as the flowers’ destiny is concerned, the only property that matters is the stone’s mass. This mass could in principle be specified to an arbitrary number of decimals, but for the flowers, only how the mass relates to the category boundaries is of any significance. If no flowers were present, the input/output categories would be a continuum without meaningful discrete boundaries. Likewise, in the absence of a human observer, the placement of a stone on the scale would be indistinguishable from any other physical interaction, just as a written letter without a reader is merely ink transferred from pen to paper. In short: for there to be an input, there must be something there to care about it. Natural selection, in the expanded sense used here, is what infuses structure with an element of intentionality. It is what determines which systems support the survival of another system, and what confers the abstract thing that humans have found it useful to refer to as “meaning”.

In the philosophical literature this idea is known as “pragmatism”, according to which physical boundaries are statistical discontinuities that are made significant by other systems who learn about them and come to depend on the predictability that they afford. It is an idea associated with the American thinkers Charles Sanders Peirce and John Dewey, foreshadowed by the creator of the “umwelt” concept (which, by the way, is equivalent to the filter), zoologist Jakob von Uexküll, and extended by the man behind “affordances” (equivalent to input categories), psychologist James J. Gibson. Uexküll’s approach is known as “biosemiotics”, which regards environmental invariances as signs that an organism interprets. Gibson’s expertise, meanwhile, was in visual perception, where his “ecological approach” compared the perceiver to a radio that “tunes into” signals in the milieu – an image that provides a convenient segue into a related and far more influential approach known as “information theory”.