OTOOP Part I.VIII: Bayes and Brains

So far reality has been depicted as a disorderly mist, fractured with a scatter of low-entropy pockets – “systems” – that feed on each other in a swirling, co-adaptive dance towards ever-increasing complexity. We have seen that probability theory, and its role in information theory, represents a powerful body of concepts with which we can begin to formalize how systems interact and learn about each other. But probability theory can be soul-crushingly difficult to grasp, and its mathematical simplicity makes the slippery logic no less aggravating. Let us therefore recapitulate some probability fundamentals.

“Probability”, we have seen, is a ratio representing the relative frequencies of some states out of all the states possible, for some unpredictable dynamical system (referred to as a “random variable”). By “state” we mean a human category of states, since no two states are objectively identical, and the heads/tails distinction has significance only for humans. The relative frequencies are normally obtained empirically, through observation, because we usually lack insight into the system’s internal dynamics. Textbook problems that run along the lines of “You have 34 lollipops in a bag, and 12 of them are strawberry-flavored…” are therefore misleadingly contrived. In real life, we rarely get to peek inside the bag. Instead, the internal logic of the system dynamics cryptically causes random initial conditions to converge on certain long-term states, and these “attractors” will be relatively more frequent in our data.

Caught in the present, an observer’s predicament is to predict what the random variable’s future state will be. Its best guess must be based on these obtained probability-values. Often, however, an observer is curious about the probability of some complex event, and this is where the basic probability theorems enter the equation. The observer could, for example, collapse several outcome states into a single category and wonder about its relative frequency – an event known as a disjunction, which amounts to the probability of event A OR event B occurring. Alternatively, the observer could wonder about two events occurring together – a conjunction, event A AND event B. The relevant probabilities depend crucially on whether the systems lurking behind the variables are assumed to be interdependent:

Independent: P(A AND B) = P(A) × P(B)

Conditional: P(A AND B) = P(A|B) × P(B)

The fact that a hypothesized predictor “co-occurs” with an outcome can be potentially confusing. Normally, a hypothesis precedes the data in time. However, if you think of them as co-occurring in short-term memory, where the predictor is still present, they are symmetrical while still distinct. The conjunction P(hypothesis AND data) is therefore equal to P(data AND hypothesis) – a property known as “commutativity” – while P(data|hypothesis) does not usually equal P(hypothesis|data). The proportion of party nights followed by a hangover does not equal the proportion of hangovers preceded by party nights, for example. So while having the same mathematical status, the two – termed “likelihood” and “posterior probability”, respectively – vary greatly in their usefulness. Moreover, the likelihood is often much easier to obtain than the posterior probability, since it is based only on observed data, but using the following trick, which exploits the commutativity of conjunction, a posterior probability can be calculated from the likelihood and the prior (let the hypothesis be A and the data be B):

Bayes’ theorem: P(A|B) = P(B|A) × P(A) / P(B)

The equation is known as Bayes’ theorem. Intuitively, Bayes’ theorem tells us how to interpret evidence in the context of previous knowledge. “Likelihood” corresponds to plausibility – how well the evidence fits the hypothesis – but to estimate the probability of a hypothesis, plausibility must be weighted by the base rate of the hypothesis. It allows us to reason backwards from measurements, to find the most probable cause of the data.

For example, in a noisy room, you may hear a sequence of words – the acoustic data – and be unsure whether to interpret it as “por mi chica” or “pour me chicken”. “Pour me chicken” is, considered on its own, more plausible: 80% of the times you have encountered someone meaning that, it has sounded just like this. However, living as you are in a Spanish-speaking country, among all the phrases you have heard, “por mi chica” is considerably more frequent. It therefore has a higher posterior probability, and you are wise to opt for this interpretation.
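
To make the arithmetic concrete, here is a minimal Python sketch of the noisy-room example. The 80% likelihood comes from the text; the priors and the other likelihood are made-up stand-ins, so treat the numbers as illustrative assumptions rather than measurements.

```python
# A minimal sketch of the noisy-room example. The priors and the 0.30
# likelihood are illustrative assumptions, not measured values.

def posterior(priors, likelihoods):
    """Return P(hypothesis | data) for each hypothesis via Bayes' theorem."""
    evidence = sum(priors[h] * likelihoods[h] for h in priors)  # P(data)
    return {h: priors[h] * likelihoods[h] / evidence for h in priors}

priors = {"por mi chica": 0.95, "pour me chicken": 0.05}        # base rates of the phrases
likelihoods = {"por mi chica": 0.30, "pour me chicken": 0.80}   # P(these sounds | phrase)

print(posterior(priors, likelihoods))
# The Spanish phrase wins despite its lower likelihood, because its prior dominates.
```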

Iterated Bayesian inference constitutes learning. For example, the virtue of the cryptographic “one-time pad” mentioned earlier is that nothing is learned, since the prior and posterior probabilities are equal. A difference between prior and posterior means that the probability-value has been revised, that the event had self-information, and that something has been learned.

Digital implementations of Bayesian inference are what allow software such as Siri, Google Translate and Amazon’s recommender system – as studied in disciplines like “robotics”, “machine learning” and “artificial intelligence” – to modify itself as a result of past interactions and, from the comfort of its silicon shell, infer systematic structures with which it has never been in direct contact. Given its impressive achievements in these fields, Bayesian inference provides a sort of “existence proof” that complex behavior can emerge from physical systems, thanks to the richness of incoming data, without the aid of some mystical force. This makes it extremely attractive to argue that it constitutes something of a grand organizational principle with which we can begin to understand how the brain, and complex adaptive systems generally, model their environment in order to maintain minimal entropy and minimal surprise.

One Bayes-centered account is the “predictive coding” approach in computational neuroscience. According to this approach, the many hundreds of sensory megabits that at every waking moment flow into the brain from peripheral receptors are compared with the sensory states that the brain has predicted it will receive as input, based on the long-term models it has stored in synaptic weights. The goal is to maximize the correlation (i.e. mutual information) between internal state and external state, so that, for example, the firing pattern for “apple” only occurs when an actual apple is in sight. To achieve this, the brain must assess and minimize the discrepancy between sensory data and its own predictions. The image is one of an almost Darwinian struggle between candidate hypotheses, which are evaluated both on their fitness (i.e. how small the discrepancy is) and on how well they have fared historically, so as to continually update their posterior probabilities.

PredictiveCoding

The perceptual system is known to be hierarchical in structure, mirroring the multi-layered reality it is meant to model. Causal invariances in external reality occur at different spatio-temporal frequencies. Some are long-term, like the seasons; some are mid-term, like how the irritable behavior of a friend tends to foreshadow a tantrum. Others are almost instantaneous, like how a certain pattern of light would change if you tilt your head. Sensory cortex therefore has distinct levels of processing that vary in their grain and extent, binding together an internal representation bottom-up, all the way from individual receptor signals, via basic geometric features, to complex conceptual categories. What the “predictive coding” approach argues, based on evidence of feedback connections between the levels, is that multiple competing hypotheses simultaneously cascade in the opposite direction: from a vague, general gist of the situation, to fine-grained predictions about what low-level attributes will be perceived in particular areas of the visual field. At each level, the sensed state is then compared with the state predicted by a higher, slower-changing level, and the discrepancy gives rise to a feedback signal called “prediction error”.

Candidate hypotheses vary in their prior probabilities, which weight the corresponding error signals. After Bayesian calculations have taken place, the best-performing hypothesis provides an error signal to the level above it, thus feeding a posterior probability into the likelihood-value higher up, influencing what hypothesis will perform best there. If the error signal climbs through many levels, indicating that high-level predictions have been far off the mark, then the hypothesis will be changed globally, but if it climbs only a little, only local, short-term hypotheses will have to respond and be updated. Finally, in order to actually recognize the stimulus, the high-level source of the winning hypotheses – those with the highest posterior probability and the smallest error signal – would have to be identified and determine perceptual content.

Presumably, once a plausible model emerges, competing models are inhibited via lateral connections through some form of positive feedback mechanism. This may be illustrated with Douglas Hofstadter’s concept of the “parallel terraced scan”, which he used in a computer program able to flexibly draw analogies between letter strings like ABC and FGH. “Alphabetical consecutiveness” is one of many basic concepts (high-level hypotheses) in a space of possible connections to explore. First the whole space of potential pathways is explored randomly, cheaply and unfocusedly. As you collect probes, you use this information to assess how promising the pathways seem, and allocate a proportionate amount of resources, so that successive stages are increasingly focused and computationally expensive, making the scan “terraced” (or hierarchical). At no point, however, do you neglect exploring other possibilities: the path you chose can, after all, turn out to be a dead end. If any of the other paths is found sufficiently promising, it will compete with the current viewpoint, and may ultimately override the positive feedback of the first. Hofstadter uses the metaphor of an ant colony: scout ants make random forays in the forest, reporting to the chief ant how strong the scent of food is, so the chief allocates more scouts in that direction but makes sure that some scouts continue to wander around unconcerned, should the chosen path later prove fruitless.
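
As a toy illustration of the parallel terraced scan, the sketch below allocates noisy “probes” to candidate paths in proportion to how promising they currently seem, while never starving the alternatives entirely. The path qualities, noise level and stage budgets are arbitrary assumptions, not Hofstadter’s actual program.

```python
import random

# Toy "parallel terraced scan": cheap random probes at first, then resources
# allocated in proportion to estimated promise, with an exploration floor.

random.seed(1)
true_quality = {"A": 0.2, "B": 0.5, "C": 0.8}      # hidden "scent of food" per path
estimates = {p: 0.5 for p in true_quality}          # initial, uninformed estimates
counts = {p: 0 for p in true_quality}

def probe(path):
    """One cheap, noisy measurement of a path's promise."""
    return true_quality[path] + random.gauss(0, 0.2)

for stage in range(1, 6):
    budget = 10 * stage                             # later stages are more expensive and focused
    total = sum(estimates.values())
    for path in true_quality:
        # allocate probes proportionally to estimated promise, but always at least one
        n = max(1, round(budget * estimates[path] / total))
        for _ in range(n):
            counts[path] += 1
            estimates[path] += (probe(path) - estimates[path]) / counts[path]  # running average
    print(stage, {p: round(estimates[p], 2) for p in estimates})
```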

parallel terraced scan

In order to judge how much fitting is enough for a model to be reliable, the brain must also store data about the input variability of different situations, compute the variability of incoming data, and continually revise prior expectations about precision. For example, the brain should hold apparent data regularities from cocktail parties to lower standards than data regularities from a silent room, and not let highly variable data revise first-order hypotheses. In other words, error signals are weighted by their reliability. The result is a regress of statistics-about-statistics, a hierarchy of higher-order statistics, where, for example, the prediction error at the second level would be the difference between expected and observed prediction error. If incoming data is precise, it is deemed reliable, strengthening the prediction error signal. When faced with imprecise, ambiguous data, however, perceptual inference falls back on prior knowledge: our perception becomes “theory-laden” and may fall victim to the well-documented “confirmation bias”, where what we see is determined by our anticipations, and where we become more attentive to evidence that confirms our preconceptions.
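
A minimal sketch of precision weighting, under the common assumption of Gaussian beliefs: the prior and the sensory sample are combined in proportion to their precisions (inverse variances), so noisy “cocktail-party” data barely move the prior while reliable data pull it strongly. The numbers are illustrative.

```python
# Precision-weighted belief update for Gaussian prior and observation.
# All numbers are arbitrary illustrations.

def precision_weighted_update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation (standard Bayesian update)."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_mean * prior_precision + obs * obs_precision) / posterior_precision
    return posterior_mean, 1.0 / posterior_precision

# Quiet room: reliable data (low variance) pulls the belief strongly toward the observation.
print(precision_weighted_update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.1))
# Cocktail party: noisy data (high variance) barely moves the prior.
print(precision_weighted_update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=10.0))
```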

The “predictive coding” theory only makes claims about unconscious processing. Cognitive scientists call this “System 1”. It operates automatically and involuntarily, as opposed to “System 2”, which is conscious and effortful, but capable of more complex cognition. System 1 provides impressions and intuitions to System 2, but their different applications of Bayesian logic do not always agree. Perhaps because its evolutionary benefits outweigh its costs (either that, or due to “evolutionary noise”), a “bug” in our pattern-recognition apparatus makes us over-sensitive to the presence of patterns. For example, in what is known as the “availability bias”, System 1 encodes the frequency of events based on how salient they are, causing System 2 to over-estimate the probability of terrorist attacks as a result of their vividness and disproportionate media reporting. If two events co-occur on very memorable occasions, this bias may also lead us to conclude that there is a causal relationship between them, and to preferentially seek evidence that confirms this. Across science, this poses a real threat to the ambition of modeling reality in the predictively most powerful way. Explanations in disciplines where experimental data are hard to obtain, such as politics and history, are at particular risk of confusing noise with cause, of reading symbolism into ink-blots.

Biases

Our reliance on salience makes our conscious selves poor Bayesians. We systematically fail to take prior probabilities into consideration when evaluating probabilities. In what is known as the “representativeness fallacy”, we judge the probability that a stereotypical nerd is a computer scientist as high, even when informed about how rare computer scientists are relative to social scientists. People with paranoid schizophrenia are particularly deficient at Bayesian reasoning, exhibiting something like an extreme form of confirmation bias. The evidence they hyper-selectively attend to may fit their theory that they are part of a CIA conspiracy, but a non-delusional individual can recognize that the base rate for conspiracies is almost non-existent compared with that of schizophrenia.
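
The arithmetic behind base-rate neglect can be made explicit. In the sketch below, the stereotype “fit” percentages and the ten-to-one ratio of social to computer scientists are hypothetical numbers, chosen only to show how a strong likelihood can still yield a modest posterior.

```python
# Hypothetical numbers: the stereotype fits 90% of computer scientists and
# 10% of social scientists, and social scientists outnumber them ten to one.

p_cs = 1 / 11                     # prior: computer scientists are rare
p_ss = 10 / 11
p_fit_given_cs = 0.9              # likelihood of the stereotype fitting
p_fit_given_ss = 0.1

p_fit = p_fit_given_cs * p_cs + p_fit_given_ss * p_ss
p_cs_given_fit = p_fit_given_cs * p_cs / p_fit
print(round(p_cs_given_fit, 2))   # ~0.47: fitting the stereotype is far from conclusive
```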

Finally, psychologists have demonstrated that humans have a very poor intuitive understanding of randomness. We struggle to comprehend the fact that HHHHH is just as likely an outcome of five coin tosses as HTHTT. This seems to be the source of a whole range of biases impacting our decision-making. We think mega-rich people like Steve Jobs are smarter than those with failed start-ups, and that famous actors are more talented than the thousands of equally deserving drama-school graduates now serving tables, when fortune plays an enormous role in our fates. The human brain’s own sensitivity to random factors tends to make expert judgments, like those of clinicians, inferior to those of simple but noise-resistant statistical algorithms. And we have a self-serving tendency, known as the “fundamental attribution error”, to ignore external and random factors when explaining negative behaviors by others, wrongly attributing the behavior to durable personality traits, while we readily blame circumstances when it is our turn to misbehave.

Perhaps a more sober worldview is that of people as grains of pollen, drunkenly walking through life’s joys and hardships, in a Brownian zig-zag between celebratory, champagne-fueled highs of “I’m awesome!”, and alcohol-drowned lows of sorrow and self-blame – all due to outcomes for which we have no credit or responsibility to take. Sometimes randomness conspires in our favor, sometimes against us – in the former, we are wise to stay hard-to-impress but humble about our own contributions, and in the latter, to forgive and believe the best about ourselves and others.


OTOOP Part I.VII: Randomness, Probability, Compression and Redundancy

“Order” and “disorder”, we have seen, are observer-dependent categories of a dynamical system’s state space. What characterizes disordered states, relative to any observer, is that different disordered states do not differ in any meaningful way. “Disorder” is one single input category, whereas order can span many different input categories. In a disordered configuration, a change would not be detectable, whereas in a low-entropy configuration any rearrangement is highly noticeable. A gas could be ordered in many different ways – packed in corners, concentrated in the middle – but the macrostates meeting this criterion are vastly outnumbered, making an observer more sensitive to changes in them. Meanwhile, if the observer system has seen one high-entropy configuration, he has seen them all. This is seen in how it is easier to draw a cloud than a portrait – even a child can draw a somewhat realistic cloud, but when drawing a portrait, the slightest failure in proportion can distort the face beyond recognition, because a face has lower entropy than a cloud.

If all of the information is invisible – if an observer’s knowledge of any particle is zero and entropy is maximal – then we call the system “random” or, alternatively, “stochastic”. The concept is made clearer if we consider a bit-string. If completely random, each bit has a 50-50 probability – it is unpredictable and uncorrelated with the previous one. Typically, but not necessarily, the string appears disordered and dull, making its details inconsequential. By contrast, if the sequence were highly structured, say 101010101010…, where the pattern results from the generating system, then a shuffling would cause a meaningful qualitative change. A generating process in which each new output is independent of the previous one – the ideal that the pseudo-random algorithms in computers are designed to mimic – can produce apparently structured strings like 10111010…, yet its entropy rate is 1 bit per character.

Snowball

This highlights an important point. For the same reason that absence of evidence is not the same as evidence of absence (a principle that in philosophy is known as the “problem of induction”, associated with the philosopher David Hume), we can prove a string to be nonrandom simply by successfully compressing it, but we cannot prove a string to be random, that is, to be generated by a process in which each new output is independent of the previous one. The digits of pi are notoriously pattern-less, but they are far from random. In fact, even random number-generator algorithms are deterministic.

In practice, we rarely have insight into the generating mechanism. The link between an observer’s knowledge and randomness may be illustrated by a thought experiment. Suppose a human is told that a bag contains one white and one black ball, but that one ball has been removed and put in a box. Lacking any further information, the chance that the boxed ball is black is the same as the chance that it is white. However, when allowed to peek into the bag and see that the ball inside is white, this bit of information instantly makes the observer certain that the ball in the box is black, without anything metaphysically extraordinary taking place – and by returning the ball to the bag and repeating the experiment, a random process has easily been created.

Lacking insight into the generating mechanism, what an observer system can do is record the frequencies of observed outputs, and use relative frequency as a basis for quantifying its uncertainty regarding the upcoming output – a theoretical construct known as probability, represented as a number between 0 and 1. The probabilities of the different outputs, of the different input categories, constitute the probability distribution of a random variable. As new data come in, this distribution is continually updated. Philosophers of the frequentist persuasion argue that it is an approximation of a “true” distribution (a parameter), and that the notion of randomness only truly applies to processes in which an infinite number of trials yield each possible outcome equally often.
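
A minimal sketch of this frequentist bookkeeping: tally the observed outputs and use relative frequencies as the current probability estimates, revising them as each new observation arrives. The data stream is made up.

```python
from collections import Counter

# An observer without access to the generating mechanism simply counts
# outcomes and keeps a running probability distribution. Data are invented.

observations = list("HTHHTHTTHHHTHTHHTTHH")
counts = Counter()
for i, outcome in enumerate(observations, start=1):
    counts[outcome] += 1
    estimate = {o: counts[o] / i for o in counts}   # current estimated distribution
print(estimate)   # relative frequencies after all observations
```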

Probability

Systems theory provides an alternative explanation for the existence of stable relative frequencies. The systems that random variables represent vary in their dynamics. If a dynamical organization is robust, it is insensitive to surrounding disorder, and very many of its possible initial states eventually result in the same state (a feature known as an “attractor”). If an initial state is randomly selected (by non-systemic dynamics), then the chance of an event depends on the number of ways in which it can be produced.

For example, in a well-organized bus company, the buses are more likely to arrive on time despite the many random disturbances that the company may face. Similarly, when throwing two dice, a total of 9 is more likely than a total of 10, partly because you are a reliable counter and partly because 9 can occur in more ways than 10. Real-life outcomes are composites of systematic and random factors. For this reason an extraordinary event – such as a stunning performance by a player in a football match – is likely to be succeeded by a more normal one, as the random component “selects” a state that is more frequent in the phase space (a phenomenon known as “regression towards the mean”). Freak events are freaky because there are so few sets of circumstances that support them among all circumstances possible. Only in some systems, of the trivially discrete kind, does it really make sense to speak of approaching some true parameter.
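
The dice claim is easy to check by brute force; the quick simulation below simply counts how often each total appears.

```python
import random
from collections import Counter

# Totals that can be produced in more ways occur more often:
# 9 has four combinations of two dice, 10 has three.

random.seed(0)
totals = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000))
print(totals[9] / 100_000, totals[10] / 100_000)   # roughly 4/36 vs 3/36
```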

Brownian

Stable relative frequencies, therefore, are partly due to the sameness of system dynamics, and partly due to how the random factors that determine system state are orderly when aggregated. Brownian motion provides a vivid mental image of this. It refers to the jagged, haphazard wandering – sometimes termed “the drunkard’s walk” – of pollen grains suspended in a liquid. When it was first observed, scientists were unsure whether this was evidence of some stable mechanism that made a pollen grain’s trajectory predictable. Albert Einstein, however, published a paper in 1905 in which he explained the walk as caused by the stochastic bumping of water molecules, whose impacts mostly cancel out but sometimes, by chance, are lop-sided enough to induce a change in direction. The totally structure-less nature of the high-entropy water-molecule motion makes the path unpredictable on small scales, but en masse it gains exquisite statistical invariance. A histogram of the grains’ distances from their mid-plane will, for example, have a normal distribution, and the walk will almost certainly not be a straight line. Inability to walk in a straight line therefore remains a reliable rule-of-thumb for determining whether your friend has had one drink too many.
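
A sketch of the drunkard’s walk: each step is individually unpredictable, yet the aggregate statistics are regular. The step size and walk length are arbitrary choices.

```python
import random
import statistics

# Individually unpredictable steps, statistically regular in aggregate.

random.seed(0)

def walk(steps=1000):
    position = 0
    for _ in range(steps):
        position += random.choice([-1, 1])   # a lop-sided bump left or right
    return position

endpoints = [walk() for _ in range(5000)]
print(round(statistics.mean(endpoints), 2))    # near 0: the bumps mostly cancel out
print(round(statistics.stdev(endpoints), 1))   # near sqrt(1000) ≈ 31.6: a regular spread
```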

Recall that when we previously quantified information, we only considered scenarios in which outcomes had equal probability. But what happens to the information content of a coin toss if the coin is biased? Clearly, if it necessarily produces heads, it resolves no uncertainty, and it contains zero bits. A data source guaranteed to generate a long string of 1’s also has an entropy of zero bits. Similarly, when you engage in a conversation and expect a reply to something you just said, you unconsciously have statistical knowledge of what word is likely to succeed the current word, for there are inter-word dependencies. “Barking up the wrong…” has a high probability of being followed by “tree”. Like a 1 in a string of only 1’s, the word “tree” should therefore carry less information than, say, “dog-owner”. And if a message source is modeled as random with equal probabilities even though it outputs apparently structured data, e.g. 10101010…, then each new character counts as 1 bit of information, even though intuitively the next bit is hardly uncertain at all. How can we make quantitative sense of this?

Well, first of all, each event can be said to possess self-information – an information content determined by how improbable the event is. The more surprising, the more self-information. This is not the same as entropy. A biased coin has between 0 and 1 bits of entropy, but if tails comes up despite having a probability of 1/6, its self-information is much higher than that of heads. Specifically, to maintain additivity (i.e. the amount of surprise of two independent events is the sum of their individual amounts of surprise), self-information is defined as I(w) = log2(1/P(w)). This is why “information” in normal parlance, seen from the observer’s point of view, means “knowledge”, whereas for an information theorist, concerned with the microstates, it means “uncertainty”. The two meanings are compatible, because we learn more from surprise than we do from fulfilled expectations.

Given the estimated probability distribution of the different outcomes of a random variable {p1, p2, p3, …}, what is the minimum number of bits required to specify the state of the system? We may return to the weather-report example. If the 20 different weathers have different probabilities, then the self-information of the more frequent weathers is lower, while the self-information of very improbable weathers can exceed 5 bits. Clearly, we still require 5 bits to distinguish fully between all possible outcomes with a fixed-length code, but what about the average number of bits required? If you think about it, if you encode each possible outcome with a bit-string whose length is proportional to the outcome’s self-information, then this average (p1·log2(1/p1) + p2·log2(1/p2) + …) equals the expected value of self-information, and it is also, more specifically, what is meant by “Shannon entropy”.
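
The definitions above translate directly into a few lines of code. The biased coin and the 20-weather distribution are the examples from the text; the functions themselves are a minimal sketch.

```python
from math import log2

def self_information(p):
    """Bits of surprise of an outcome with probability p: log2(1/p)."""
    return log2(1 / p)

def entropy(distribution):
    """Expected self-information, in bits, over a probability distribution."""
    return sum(p * self_information(p) for p in distribution if p > 0)

print(self_information(1 / 6))    # ~2.58 bits for the improbable tails
print(entropy([0.5, 0.5]))        # 1.0 bit for a fair coin
print(entropy([5 / 6, 1 / 6]))    # ~0.65 bits for the biased coin
print(entropy([1 / 20] * 20))     # ~4.32 bits for 20 equiprobable weathers
```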

Shannon, working at Bell Labs, the research arm of the US telephone monopoly AT&T, observed that if you let less frequent messages be encoded in expensive ways (more bits) and more frequent messages in cheap ways (shorter strings) – in other words, let message length be proportional to the improbability of its occurrence – you can compress a message in a distortion-free (“lossless”) way that would save the corporation enormous amounts of money. Since the letter “e” is the most common, it is efficient in terms of communication channel capacity to represent it with a shorter string. Morse code and natural languages are built on the same principle. The word “the” in English is short due to its high probability of occurring.
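
This principle is realized most famously by Huffman coding, a later algorithm that builds an optimal prefix code from symbol frequencies. The sketch below is a minimal implementation on an arbitrary sample sentence, just to show frequent symbols receiving shorter codewords.

```python
import heapq
from collections import Counter

# Minimal Huffman coding sketch: frequent symbols get short codewords, rare ones long.

def huffman_codes(text):
    freqs = Counter(text)
    # each heap entry: (frequency, tie-breaker, {symbol: codeword-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # prepend 0 to one subtree's codewords and 1 to the other's
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

text = "the theory of the thing"
codes = huffman_codes(text)
print(codes)                       # common symbols like 't', 'h' and the space get short codes
encoded = "".join(codes[s] for s in text)
print(len(encoded), "bits vs", 8 * len(text), "bits of plain 8-bit encoding")
```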

Kolmogorov

In effect, entropy defines the limit of the best lossless “compression” possible. As we have seen, for a uniform probability distribution, the entropy (minimum average number of bits) equals the logarithm of the number of outcomes we need to distinguish between. This makes an equiprobable random variable characteristically “incompressible”. However, a finite-length sequence, even though randomly generated, may be ordered. Consider, for example, a 1000-character string with the pattern “101010…”. Its Shannon entropy under an equiprobable model is maximal, but intuitively it feels very compressible. To provide for this, there is a measure known as “Kolmogorov complexity”, which is independent of any probability model, and defined as the length of the shortest computer program that outputs the sequence. Writing a loop that prints out “10” 500 times is shorter than the actual string. A maximally complex string is thus one whose shortest representation is its own complete, explicit printout.
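
A general-purpose compressor gives a rough, practical proxy for this idea: if a string shrinks a lot, it demonstrably contains structure. The sketch below compares the patterned 1000-character string with randomly generated bytes using Python’s zlib; remember that failure to compress never proves randomness.

```python
import random
import zlib

# A rough proxy for Kolmogorov complexity: how well does zlib shrink the data?
# A patterned string compresses dramatically; random bytes hardly at all.

random.seed(0)
patterned = ("10" * 500).encode()                               # "101010..." of length 1000
random_bytes = bytes(random.getrandbits(8) for _ in range(1000))

print(len(zlib.compress(patterned)), "bytes for the patterned string")
print(len(zlib.compress(random_bytes)), "bytes for the random bytes")
# Successful compression proves non-randomness; failure to compress proves nothing.
```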

Other compressions are lossy. For example, computer images may be stored as JPEG files, which use a scheme involving “chroma subsampling” and “transform coding”. These exploit the fact that, because human eyes are less sensitive to variation in color than in luminance, color detail can be imperceptibly reduced by averaging out blocks of color and allocating more bandwidth to the brightness component. This method causes a loss of information, since the decoded output is not identical to the input, which is sometimes seen as visible distortions.

To be compressible is to contain “information redundancies” that compression serves to reduce. Redundancy does not usually imply waste. It is used to counterbalance noise and equivocation (the disappearance of data) and so ensure fidelity in transmission. The genetic code has redundancies in how, for example, several codons code for the same amino acid, which prevents some errors in DNA from translating into deformed proteins. Spoken and written English has redundancies in how we can communicate even in a nightclub with loud music, and in how “c” is followed by “k” so often that we could replace “ck” with a single letter without any loss of meaning.

Redundancies

Languages belong to a class of systems that are called “ergodic”. This means that their statistical properties are uniform throughout the process (“e” is the most common letter in English almost regardless of the length of your sample). However, different languages vary in their entropy, and that of English is comparatively low. This means that the space a cryptographer needs to explore in order to decipher an encrypted message is reduced. For this reason, the only cryptographic protocol known to be unbreakable in principle is the so-called “one-time pad”, in which the secret key must have maximal entropy – the next digit should be completely unpredictable given the previous ones – must be as long as the message itself, and must only be used once. As a result, the “one-time pad” is largely impractical, because any secure method for distributing the key could itself be used to share the message!
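
A minimal sketch of the one-time pad itself, using XOR and a maximal-entropy key as long as the message; the message is of course an arbitrary example.

```python
import secrets

# One-time pad: XOR the message with a random key of equal length, used once.
# Without the key, every plaintext of the same length is equally consistent
# with the ciphertext, so nothing is learned.

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))                   # maximal-entropy key
ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
print(ciphertext)
print(decrypted)   # b'ATTACK AT DAWN'
```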

Just as languages save longer codes for infrequent messages, our nervous system saves expensive processes for infrequent sensations. Surprise is expensive to mediate, so we would rather save firing for improbable events, letting “non-firing” as a default map onto the probable state of affairs. As we habituate to stimuli, we fire less. To prevent over-firing, we are numbed to disorderly systems, and experience them as dull and uninteresting. Entropy is a useful concept precisely because it links microstate properties to the input categories of the beholder. Somehow our perceptual system is attuned to these pockets of virtual, hologram-like phase spaces projected onto external dynamics.

Our numbness to randomly behaving, time-varying quantities is perhaps most obvious in the case of sound. What physicists mean by “noise” is not statistical error, but sound sequences that sound the same at any playing speed. Human voices clearly don’t, since speeding them up makes them sound like Donald Duck. Mathematically, such a process is said to be scale-free, or fractal. This is seen in how the power spectrum, which describes how the average behavior (e.g. variation in loudness) varies with frequency, is for noise proportional to an inverse power of the frequency, symbolized as f^(-a). If you look at the high-frequency part, the graph has the same statistical properties as the low-frequency part.

Random sound sequences are a specific type of noise known as white noise, for which a = 0. Different a-values mean different amounts of correlation. White noise is what you get if you let a random number-generator select every note. It is equally hissing and featureless at all frequencies – the signal is serially uncorrelated – and at low intensities it tends to have a calming effect on a human listener. The universality in musical aesthetics appears to be partly explained by our having evolved optimal sensitivity to sound sequences with particular power-spectral properties, with just the right amount of surprise and confirmation. Analyses of classical Western music indicate that these lie between a = 1 (“pink noise”) and a = 2 (“brown noise”), reflecting preferences in correlations, and ultimately the grain and extent of the boundaries of the human sound-processing system (the potential reasons for this preference are fascinating and will be explored later).
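
As a rough numerical illustration of the power-spectrum idea, the sketch below generates white noise and checks that its average power is about the same at low and high frequencies (a ≈ 0). It is a minimal numpy example, not an analysis of real music.

```python
import numpy as np

# White noise: uncorrelated samples, hence a roughly flat power spectrum.

rng = np.random.default_rng(0)
signal = rng.standard_normal(2**14)            # white-noise samples
spectrum = np.abs(np.fft.rfft(signal)) ** 2    # power at each frequency

half = len(spectrum) // 2
# average power in the lower and upper halves of the spectrum is about the same
print(spectrum[1:half].mean(), spectrum[half:].mean())
```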

dynamic

Because order itself is improbable, it has high self-information in the context of a generally chaotic dynamic. We don’t expect to find order in nature. In order to reduce information-processing load, an umwelt is under pressure to sensitize the organism to low-probability events, both in order to take pre-emptive action, and because information redundancies are opportunities for re-designing reality. Our fixation with order in a Universe that otherwise succumbs to the 2nd law of thermodynamics previously led “vitalists” to attribute it to mystical forces and divine assistance. Life, a marvel of order, seemed like a miraculously local counter-current in an aging, self-undermining sea of erosion and decay, where order spreads and evens out into a lukewarm soup of uselessness.

The loophole in the 2nd law that life exploits to maintain non-equilibrium is that order can arise at the cost of more disorder elsewhere. For example, minutes after the Big Bang, the Universe was filled with gas of staggeringly low entropy. Because of its high density, every region of gas pulled on every other, causing orderly clumps to form but heat to increase (increasing entropy). The heat allowed nuclear processes to arise, like the stellar fusion that forges carbon, which, under the orderly sunlight, plants can use to produce well-ordered organic compounds. Animals digest these compounds to maintain internal order, powering semipermeable membranes that hinder free diffusion through energy-consuming pumps and filters, thereby manipulating concentrations of molecules and catalyzing reactions that are normally too improbable to occur. By feeding on negative entropy from its supportive environmental conditions, an organism consumes their potential to do mechanical work, at the cost of dissipating heat and increasing entropy in the larger thermodynamic context.


OTOOP Part I.VI: Thermodynamic Entropy

Shannon’s measure of information is actually known as “entropy”, a word better known from thermodynamics, whose famous second law states that, in a closed system, entropy always increases towards a maximum. For example, we may place a cool object next to something warm – heat orderly packaged into one but not the other – but we always end up with two objects of equalized temperature. The distribution is evened out and becomes unable to do anything interesting. Heat flowing from hot to cold can drive a turbine or push a piston to lift a weight, but if you do not supply more heat, the system soon dies, and we no longer experience “order”, whatever this elusive property is.

The 19th-century pioneer of statistical mechanics, Ludwig Boltzmann, gave a formula for thermodynamic entropy, S = k·log W, that is mathematically equivalent to Shannon’s, and on the advice of computing pioneer John von Neumann, Shannon chose the same name as a self-conscious analogy with it – a decision that has led to considerable confusion as to wherein their deep relationship resides. Here, the equivalence will be explained via the concepts of “visible”, “invisible” and “mutual” bits.

Recall the trade-off between extent and grain at the boundary filter of any entity. For a quantitative variable, “extent” refers to the range of values a system can register, and “grain” to the size of the smallest distinguishable difference – in other words, its precision (number of significant figures). Dividing range by precision gives the number of values that a system can distinguish between. This number represents the amount of available information, which, depending on context, may be seen as either syntactic, semantic, or pragmatic. Syntactically, a thermometer reading of 31.56 degrees, for example, has a finer grain and therefore more bits than a friend screaming “30-ish”.

If the potential sun-bather relies on a less precise friend, we may say that more of the information in the over-arching system (the bits required to define the detailed microscopic state of people living with a variable atmosphere) becomes invisible. The bits that describe the temperature with infinite precision operate cryptically beneath a blur. The bits describing the coarse-grained, collective, statistical properties extracted by the system nested inside of it – the temperature registered by the sun-bather – meanwhile become visible, in that they are used to distinguish between different brain states (that of “30-ish”) in the human observer.

Thermodynamic

Consider how, at microscopic levels, the molecules in a gas behave myopically in a Newtonian fashion, and for a moment assume the viewpoint of such a molecule. Each molecule may be regarded as a system in its own right, and while gas molecules may not evolve biologically, the formation of molecules from atoms nevertheless involves feedback and a “survival of the stablest”; molecules therefore possess an umwelt of sorts, as they register the position and velocity of colliding molecules to “calculate” their future trajectory.

Now, zoom out hierarchy-style and take the perspective of an external, human observer. To obtain data about the position and velocity of each gas molecule is a practical impossibility, but suppose you have managed to extract these for one molecule. For simplicity, regard the molecule as a single bit whose state we know to be 0. This bit interacts with an unknown bit, such that the known bit’s new state comes to depend on the unknown bit. An example of such a bit-flipping rule would be “If the unknown bit is 1, then flip the known bit”. As a result, the bits become correlated, with a state of either (0,0) or (1,1), although we don’t know which of these.

After interacting with the unknown bit, each bit now has one bit of uncertainty, while the pair taken together also has just one bit of uncertainty – it is either (0,0) or (1,1). Adding the bits’ individual uncertainties and subtracting their joint uncertainty gives us a quantity known as mutual information – here, one bit of it. As a consequence, the total information content – the sum of the invisible and visible components – remains constant, but our bits of ignorance of the system have spread – the visible information has decreased – and will continue to do so, almost like an epidemic. The collision could, of course, be reversed and thus reduce our uncertainty, but assuming molecular chaos, any two molecules colliding again will effectively be uncorrelated.
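
The bookkeeping can be written out explicitly. For the correlated pair above – (0,0) or (1,1) with equal probability – each bit alone carries one bit of entropy, the pair jointly carries one bit, and the mutual information is 1 + 1 − 1 = 1 bit:

```python
from math import log2

# Mutual information of the correlated pair: I(X;Y) = H(X) + H(Y) - H(X,Y).

joint = {(0, 0): 0.5, (1, 1): 0.5}

def entropy(dist):
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

def marginal(dist, index):
    out = {}
    for state, p in dist.items():
        out[state[index]] = out.get(state[index], 0) + p
    return out

h_x = entropy(marginal(joint, 0))
h_y = entropy(marginal(joint, 1))
h_xy = entropy(joint)
print(h_x + h_y - h_xy)   # mutual information: 1.0 bit
```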

The idea of total information as a conserved quantity is known as “Landauer’s principle”. For a gas molecule, being part of an ordered gas where, say, the particles are concentrated in a corner means that its micro-state encodes fewer past collisions and registers fewer bits. For this molecule, there is less uncertainty as to the microstate of another gas molecule about to collide with it. The particle carries little information, because there is little uncertainty to resolve. There is little invisible information in the gas. For an external observer, meanwhile, an ordered gas means less uncertainty as to the whereabouts of a particular molecule (it is somewhere in the corner): there is a lot of visible information. You may expand a gas container so that the molecules inside it register fewer bits, but only at the expense of making molecules outside of it register more bits. If you add heat, there will be more possible microstates, and the state description will require more bits; however, adding heat means that the system is no longer closed, so the principle is not violated.

Maxwell

Consider a thought experiment by the Scottish physicist James Clerk Maxwell, reminiscent of an earlier demon imagined by the French mathematician Pierre-Simon Laplace, in which a demon sorts gas molecules into a “faster-than-average” and a “slower-than-average” compartment of a gas chamber, so as to convert disorganized energy into ordered energy, thereby breaking the 2nd law of thermodynamics. What makes such a perpetual-motion machine impossible? Well, in order to sort, the demon would have to process information – to compute. Computation is necessarily physical – it requires energy and releases heat, so the entropy bill is duly paid.

Landauer’s principle is, however, perhaps best illustrated by the heat emitted from your laptop – to erase a bit, the electrons in the capacitor realizing the bit must be discharged, and this causes a change in temperature that preserves the bit you tried to delete. In the overall system (environment + laptop) there is thus no ambiguity about the initial states, and if your cooling fan does not work properly, this becomes painfully palpable. One way to express this is that the classical laws of physics are one-to-one maps: each input state has only one output state, and vice versa. The classical universe is deterministic, or computationally reversible. (Later we will see that, depending on interpretation, quantum chance may be said to be an exception to Landauer’s principle and determinism, so as to inject new, fresh bits into the Universe.)

But how is the invariance of total information content compatible with the 2nd law of thermodynamics? Thermodynamic entropy corresponds to the invisible component – the microscopic jigglings that, as time goes on, increase as interactions with unknown values cause an external observer’s initial certainty about a molecule to be washed away.

To reinforce the equivalence of entropy-as-disorder with invisible information, imagine a phase space representing all the possible states of a gas chamber (by a “state” we here mean a complete description of all particles’ micro-states, i.e. their positions and velocities). For an external observer in a state of total ignorance, every such state has the same probability. However, if gas has just been released in one of the chamber’s corners, then the number of likely states is dramatically reduced: the observer knows the gas is in the macro-state category of “majority of particles are in the upper-right corner”. As molecules slavishly whiz around in their Newton-dictated ways, the mutual information will increase as the position and velocity values encode more and more previous encounters, converting visible information into invisible information, and the number of likely states increases as a result.

Molecules are thus inexorably driven to explore more space, with an overwhelming tendency to spread out, evolving into a highly disordered configuration. While it is not impossible for a gas to unmix itself into two compartments, the relative frequency of states that do so is vanishingly small, and the probability is effectively zero. It becomes useful to partition the phase space into the categories “order” and “disorder”, and the latter is what corresponds to higher observer uncertainty and more entropy – that is, more invisible information.


OTOOP Part I.V: Basic Information Theory

Most psychologists studying perception and cognition today argue that Gibson’s radio-metaphor is flawed because a brain, unlike a radio, identifies a “signal” not directly, but in a memory-dependent way: a brain learns over time, via structural changes, what is significant and what isn’t, so the radio’s “tuning” is determined by a history of interactions. However, if a hierarchy-theoretical approach is taken, Gibson is somewhat vindicated, because at the levels of neurons and molecules “perception” becomes more and more direct: these smaller systems “anticipate” much simpler environmental features in order to change their state. But the greatest virtue of Gibson’s metaphor is its recognition that what leaves a lasting, structural trace in a system is other systems. Interaction with non-systems – with dynamics that lack pattern and repetition – will not be reinforced, and will consequently be cancelled out. A system’s predicament can be likened to that of detecting a signal through a noisy channel, regardless of whether there is an intelligent producer of that signal.

Gibson

The visual system’s task of creating a stable internal model of some perceived distant object (the “distal stimulus”), despite the fact that the optical input (the “proximal stimulus”) is never the same twice, can be compared with a telegraph-wire designer’s job of ensuring that interfering disturbances do not admix with a sent message (i.e. the distal stimulus) to the extent that the received version (i.e. the proximal stimulus) becomes incomprehensible and therefore meaningless. The task, essentially, is one of distinguishing between possible scenarios, between input categories. If the telegraph communication is about a weather forecast, the recipient wants to know unambiguously whether it will be rain or sunshine. But just as an object is “signaled” to the perceiving organism via the three dimensions of reflected light, a telegraph message is communicated via Morse code, so the preservation of the rain/sunshine message is ultimately a matter of preserving much simpler physical distinctions – those between short and long signals across the wire. If noise prevents short and long signals from being distinguished, the original message cannot be reconstructed.

We may cast the issue formally in the following way:

  • Any system that specifies a medium so as to indirectly specify another system’s state may be considered a “sender”, or “message source”. The “recipient” treats it as a “random variable” – a cluster of possible outcomes, one of which will occur, though the recipient is unsure of which. The system, therefore, is probabilistic.
  • For now, suppose there are 20 different types of weather and that the messages are equiprobable, in the sense that the recipient has encountered each message equally often, so that all possible messages have a 1/20 probability.
  • Represent each distinct outcome by a number as an identifier. This could be arbitrary, for example, “hail” could be 12. The recipient system knows about this label, and is able to disambiguate between outcomes using it.
  • We could represent such distinctions by a base-10 system, since we find this counting system very natural as a consequence of being born with 10 fingers, but disregarding such human-centric quirks, the objectively simplest way is to use the base-2 system, which uses the same principles, but with only two numerals: 0 and 1.
  • This way, outcomes involving complex stimuli are reduced to a binary sequence of atomic distinctions, of bits. Two bits enable decision among four equally likely outcomes, three bits eight outcomes. The number of bits required is equivalent to the number of yes/no questions required to determine what the message is, out of all possible messages, and we will require 5 bits for our purposes.
  • Given a number n of equiprobable possible outcomes that need to be distinguished, the number of bits required to encode them is log2(n), and this constitutes the “information content” of each message. The number of bits that a medium can preserve – in other words, its maximum possible transmission rate – is likewise its “information channel capacity”. (Note that in the illustration below, the last two bits are correlated, and therefore only count as one.)

Bits
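
In code, the weather example is little more than a couple of lines; the “hail = 12” label is the arbitrary identifier from the list above.

```python
from math import ceil, log2

# With 20 equiprobable forecasts, log2(20) ≈ 4.32 bits of information content,
# so a fixed-length binary code needs 5 bits per message.

n_outcomes = 20
print(log2(n_outcomes))          # ≈ 4.32 bits of information per message
print(ceil(log2(n_outcomes)))    # 5 bits needed to label every outcome distinctly

# e.g. "hail" = 12 could be sent as the 5-bit string:
print(format(12, "05b"))         # '01100'
```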

So if you ever wondered what a “bit” is, now you know. A bit can be thought of as the key that unlocks the answer to a binary query. It was the philosopher Gregory Bateson who defined a unit of information as “a difference that makes a difference” – the smallest physical difference distinguishable by a system. To resolve the uncertainty of a bit – to have it collapse into either one state or the other – is to convey “information”. Paradoxically, this means that the amount of information is determined by the initial absence of content, where information is the decrease in data deficit. The numerals 0 and 1 could be replaced by “no”/“yes”, “false”/“true” or “heads”/“tails” as we see fit. A bit could be realized neurally, electrically, and in an indefinite number of other ways. It could also refer to any level in the hierarchy: if there were only two types of weather, the rain/sun distinction would constitute a bit as well. Some philosophers therefore find it useful to divide data into primary data or “dedomena” (“data in the wild”, cognitively unprocessed lack of uniformity in the world), “secondary data” (the registered differences), and “derivative data” (lack of uniformity between two signals, for example between two averages in a statistical analysis).

A related, useful classification of information is the following, by mathematician Warren Weaver:

  • A computer file has a certain bit-content defined by the number of registers (memory units) it occupies. This includes bits that encode not only letters and symbols, but also format specifications like font size and text alignment, which a document reader uses to determine how to color the pixels on your computer screen. All this may be referred to as “syntactic information”. The meaning that a human may invest in it is irrelevant. Once rendered, it could be absolute gibberish – what matters is that all the distinctions in its physical state are preserved.
  • When a human reads the document, which happens to be a research report, and its written symbols trigger brain activity patterns that correspond to those of the document’s author, the kilobytes of the file may serve to resolve bits of uncertainty in what ideas the document embodies. These bits are the “semantic information”.
  • For a human who is already aware of these findings, the document brings nothing new to the table – it does not permit the reader to make any more adaptive decisions than he already could – and therefore resolves no uncertainty. Its “pragmatic information” content is zero.

DifferentInformation

An analogous example of syntactic, semantic and pragmatic information is that of a dog smelling food. Features of the odorant’s molecular structure are used to disambiguate between rotten and edible meat, but because the dog’s stomach is already full, this bit is not very meaningful, and its pragmatic information can hardly be said to exist. This highlights the relativistic nature of information: depending on the measuring instrument and interpreter, a physical difference can convey different numbers of bits. In normal parlance, “information” connotes wisdom, meaning, useful chunks of knowledge and a rather ethereal quality of “aboutness”. This is because we normally use the word with reference to our own umwelt, in which a construction manual is “uninformative” if it is written in a language we don’t speak, and a textbook is “meaningless” if it covers nothing of what we will be examined on.

As already pointed out, our everyday concept of “meaning” is typically applied only to systems that have been under selective pressure to inform, and in this sense intend to inform. Humans may have discovered a correlation between clouds and rain, but clouds did not evolve for the sake of telling you to seek shelter or bring an umbrella. “Meaning” feels much more natural when applied to man-made symbolic systems like weather forecasts and poetry, or instinctive semiotic systems like courtship rituals and facial expressions. Before they were deciphered, hieroglyphs were obviously information, but their meaning was buried so deep in the linguistic expressions that a Rosetta stone was needed to excavate it. The litmus test between information and meaning can therefore be said to be whether or not it makes sense to speak of misrepresentation.

It is actually common, in the history of science, for a concept initially to be regarded as a kind of ethereal substance, and to be later clarified in relational terms. Our brains, it seems, wish to cling to the conviction that a notion represents something concrete for as long as they can. During the 19th century, heat was thought of as a fluid, and energy was conceived of as intangible stuff – an “élan vital” or “ether” that by mysterious means invigorated machines and bodies – before it was re-framed as the quantity that is conserved but transformed across processes of induced change. Similarly, “information” is often thought of as a disembodied commodity that can be transferred, stored, and sold, endowed with a mystical “aboutness”. It would take until the publication of Claude Shannon’s mathematical theory of communication in 1948 for it to be quantitatively defined, providing solutions to engineering problems on which our globalized culture critically depends.

Shannon’s theory dealt with messages that may have different probabilities, and we will discuss that later; suffice it for now to remember that information is not something intrinsic, but something relational. Defining it as the resolution of uncertainty, as the context-determined distinguishability between states, means that we can measure the information capacity of any physical substrate given the number of distinguishable outcomes it can support. If the micro-states are clear-cut, mutually exclusive, and uncorrelated with each other, then it really is just a matter of counting the number of possible macrostates and taking the logarithm of that count.

A computer is a powerful tool precisely because it affords us so many crisp, physical differences that may be flexibly assigned reference, and be physically manipulated as a stand-in, or simulation, for some physical process that humans find useful. In a computer, bits are stored in “capacitors”, microscopic buckets that hold electrons. A capacitor has an umwelt of two categories: that of zero voltage, in which it holds no excess electrons (it registers 0), and that of non-zero voltage, in which lots of excess electrons represent the registration of 1. The nervous system, too, is powerful precisely because it has lots of simple systems that distinguish between two different micro-states, and which collectively can organize adaptive responses to the environment that they represent. And with four different bases, DNA may not be binary, but it makes excellent sense to speak of this linear string as “coding for” proteins and traits, in how, given a normal chemical environment and ecological backdrop, changes in DNA will reliably correlate with changes in the latter. Who, then, is the programmer of DNA? The message source is systematic changes in the eco-system, which, unlike non-systemic fluctuations, are persistent enough to influence the gene pool and inform about what genetic changes are appropriate.


OTOOP Part I.IV: A Patchwork of Metaphor

The language used so far may have seemed too removed from the number-crunching chores of experimental science to be anything other than a whimsical indulgence. We have waxed wishy-washy about measuring devices as “perceiving” and “responding”, about the environment as “sending signals”, and about “filters” and “categories” as pursuing some virtual, transcendental existence everywhere we look. Through the backdoor, we have smuggled in a whole ensemble of precisely the kind of dim anthropomorphisms and dodgy metaphors that we are normally advised to steer clear of. But to become aware of relations more abstract than our basic-level categories – in other words, to extend our umwelts – comparison with concrete relations already in our umwelt may be the only method we have available. And what is metaphor but the act of understanding one thing in terms of another?

Concrete analogs rarely capture more than a few aspects of the abstract pattern that we wish to comprehend, but they constitute bundles of logic that may be mentally combined, twisted and tinkered with until they grasp the abstract pattern as a whole. A cluster of sprawling analogs is like a clanging machinery that careful thought can hammer into harmony. The patchwork that results possesses none of the effortless flow of the fabric of reality itself, seamlessly sweeping past our heavy-handed attempts at describing it. However, which analogy we choose is still not arbitrary – “signal” and “filter” are, for example, more appropriate than “guitar” and “refrigerator” – and this non-arbitrariness makes analogy a flawed but irreplaceable wellspring of new understandings. The choice of analogy influences what hypotheses we choose to test, and how we interpret experimental results. Therefore, let us hammer the analogies introduced so far into a coherent whole, and explore how more rigorous quantitative methods – like information theory and probability theory – can be derived from them, providing their hazy hunches about the Universe with some number-crunching credibility.

Scientificmetaphor

To begin with, there is that low-key piece of metaphor suggesting that the dynamics of the Universe are organized into bounded entities called “systems”. You won’t find a scientist today who does not make regular use of this concept in his work, but you will be hard pressed to find one eager to elaborate on why reality seems populated by systems when nothing says it must be. While systems could quite unproblematically be defined as “changing sources of observations” – covering things as diverse as atoms, the Earth’s atmosphere, the bacterial population in a petri dish, and human organizations – these sources would have to maintain some degree of self-similarity over time in order to be continuously observed, and the definition does nothing to explain the origin of this self-similarity.

In some cases, this self-similarity could be externally imposed, as when a chemical system is insulated by the walls of a beaker, but other systems, like a rain storm or the human episodic memory, can be repeatedly measured without being spatially well-defined. This carries the intriguing implication that some dynamics somehow are capable of sustaining themselves for long periods of time, while other wisps of activity soon dissolve out of failure to achieve this degree of individuality. There is something in the architecture of their interactions, in how their components combine and constrain each other, that affords these patterns of change stability even as the substances they are made up of are constantly replaced. And whatever this something is, it is what provides a way for us to partition nature into features and aspects, from the solid, massy entities we take as unambiguously real, to the tenuous statistical patterns that keep scientists awake at night.

[Figure: Complicity]

Part of this something is touched upon by the concept of “feedback”, borrowed from self-regulating artifacts like sound amplifiers and thermostats. Simple feedback guides planetary orbits and mechanical systems by continually plugging the results of Newton’s equations back into the same equations, forming a never-ending loop. These systems tend to be orderly to the point of boringness. Myriads of such simple components can, however, if supplied with energy, interact to realize positive feedback, as seen in turbulent flow, where large eddies break up into a hierarchy of ever-smaller eddies until the energy can dissipate through molecular diffusion. Such systems are thermodynamically open, “chaotic”, and not particularly long-lived.

A much more robust class of systems is that of “complex adaptive systems” – which includes ant colonies, the immune system, and human economies – whose dynamics are organized so as to learn. Negative feedback, in which the system architecture is continually updated via trial and error, allows them to accommodate past disturbances, form an internal schema of their environments, and use this schema to take pre-emptive action against predicted threats, thereby maintaining their non-equilibrium state. In the context of biology, this mechanism is familiar as “homeostasis”. Homeostasis is what distinguishes a living organism from a dead one – a corpse is at thermodynamic equilibrium with its environment, while a living organism is poised in a state particularly suited to avoiding becoming a corpse, and to remaining a system.
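
To make the loop concrete, here is a minimal sketch of negative feedback in the thermostat spirit invoked above. The set point, gain, and disturbance sizes are illustrative assumptions rather than anything specified here; the point is only that repeatedly correcting a detected deviation keeps the system hovering around its non-equilibrium state despite continual perturbation.

```python
# Minimal sketch of negative feedback (a hypothetical thermostat-style homeostat).
# The set point, gain, and noise level are illustrative assumptions.

import random

set_point = 37.0   # desired internal state (e.g. body temperature in degrees C)
state = 33.0       # current state, initially perturbed away from the set point
gain = 0.3         # how strongly the system corrects a detected deviation

for step in range(20):
    disturbance = random.uniform(-0.5, 0.5)  # environmental perturbation
    error = set_point - state                # detected deviation
    state += gain * error + disturbance      # corrective response plus noise
    print(f"step {step:2d}: state = {state:.2f}")
```

Run it a few times: the state drifts towards the set point and then wobbles around it, which is all that “maintaining a non-equilibrium state” amounts to in this toy picture.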

[Figure: Homeostasis]

The deceptively straightforward notion of a “system” therefore hides a lot of ambiguity and unanswered questions. We have already discussed qualitative accounts of the emergence of systems, such as the idea that structure exists because spontaneously formed boundaries are inherently likely to survive, and therefore also to spontaneously combine into “near-decomposable” hierarchies. Theorists working in the fields of computer science, complexity science, and network science are only beginning to develop a vocabulary with which to characterize these issues.

To maintain self-similarity, an adaptive system would have to be bounded in the sense that it has evolved an interface that causes a detected perturbation to induce a dynamical modification – a learned response. It is hard not to imagine this interface as some spectral kind of membrane, with a slippery texture that hovers between a liquid and solid state so as to cautiously accommodate external pressure without succumbing to it. The interface can be said to have a filter – a set of categories of perturbations that it has adapted to, is responsive to, and without which its current structure would make no sense. Its filter is therefore not like a simple fishing net with only one type of hole that restricts what kind of fish it may catch. Rather, it classifies disturbances into input categories in order to respond with what its past has taught it to be a relevant action.

[Figure: Categories]

Reconsider the flower-watering device from the last section. Even though this system is an artifact, it can be said to have evolved via adaptation and natural selection, for an analogous iteration of trial and error went into its conception and construction. Its structure is a consequence of environmental constraints – including the problem-solving goal of its inventor, cost considerations, and basic physical restrictions – as if it were molded by an invisible hand. In the survival of the fittest, if the device failed to satisfy its intended function – to control which flower to water as a function of mass – it would be dismantled, and some other clever design would take over its niche. In this way, natural selection has defined a filter, here composed of the six different weight ranges – the set of input categories that it has prepared responses for – which support its continued existence and consequently make the categories meaningful. Note that the filter is a small subset of all possible influences, and only includes other systems that were significant during its formative context (the device does not behave meaningfully in response to, say, a laser ray, because there was no selective pressure for it to do so). This makes the filter analogous to the domain – the set of valid inputs – of a mathematical function f(x).
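
As a toy illustration of the filter-as-domain analogy, the sketch below maps weight ranges to prepared responses. The particular ranges and watering actions are hypothetical stand-ins, since the device is only described qualitatively; what matters is that masses falling outside every category simply have no meaning for the system, just as a laser ray has none.

```python
# Minimal sketch of a "filter" as a set of input categories coupled to responses.
# The six weight ranges (in grams) and watering actions are illustrative assumptions.

WEIGHT_CATEGORIES = [
    ((0, 50), "water flower 1"),
    ((50, 100), "water flower 2"),
    ((100, 150), "water flower 3"),
    ((150, 200), "water flower 4"),
    ((200, 250), "water flower 5"),
    ((250, 300), "water flower 6"),
]

def respond(mass_in_grams):
    """Classify a stone's mass into an input category and return the coupled response.

    Masses outside every category fall outside the device's "domain": the filter
    has no prepared response for them.
    """
    for (low, high), action in WEIGHT_CATEGORIES:
        if low <= mass_in_grams < high:
            return action
    return None  # outside the filter: no meaningful response

print(respond(120))  # -> "water flower 3"
print(respond(999))  # -> None
```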

We could think of the system as projecting a space of possibilities upon reality – categories of scenarios that could happen. While a human observer, also a system, may describe the input stone along an unlimited number of sensory dimensions, like color, texture, and shape, as far as the flowers’ destiny is concerned, the only property that matters is the stone’s mass. This mass could in principle be specified to an arbitrary number of decimals, but for the flowers, only how the mass relates to the category boundaries is of any significance. If no flowers were present, the input/output categories would be a continuum without meaningful discrete boundaries. Likewise, in the absence of a human observer, the placement of a stone on the scale would be indistinguishable from any other physical interaction, just as, without a reader, a written letter is merely ink transferred from pen to paper. In short: for there to be an input, there must be something there to care about it. Natural selection, in the expanded sense used here, is what infuses structure with an element of intentionality. It is what determines which systems support the survival of another system, and what confers the abstract thing that humans have found it useful to refer to as “meaning”.

In the philosophical literature this idea is known as “pragmatism”, according to which physical boundaries are statistical discontinuities that are made significant by other systems, which learn about them and come to depend on the predictability that they afford. It is an idea associated with the American thinkers Charles Sanders Peirce and John Dewey, foreshadowed by the creator of the “umwelt” concept (which, by the way, is equivalent to the filter), zoologist Jakob von Uexküll, and extended by the man behind “affordances” (equivalent to input categories), psychologist James J. Gibson. Uexküll’s approach is known as “biosemiotics”, which regards environmental invariances as signs that an organism interprets. Gibson’s expertise, meanwhile, was in visual perception, and his “ecological approach” compared perception to a radio that “tunes into” signals in the milieu – an image that provides a convenient segue into a related and far more influential approach known as “information theory”.


OTOOP Part I.III: Umwelts, Affordances, and Measurements

[Figure: Umwelt]

The idea that any system can be said to have its own ontology – its own distinctive way of carving up reality – means that its structure implicitly, as a mechanical consequence, and without consciously reflecting on it, distinguishes between “inputs” and couples these to certain “outputs”. This coupling is an adaptation to circumstances that prevailed during its evolutionary past, in which the system interacted with other systems. Similarities and differences in how these stimuli disturbed the system and could be offset by the system’s own responses were thus gradually engraved in its structure, which silently and obliviously came to embody categories of input and output.

[Figure: Categories by structure]

Because these category demarcations contribute to the system’s stability and survival, they are useful. They are the results of patterns in the environment that have impinged on the holon, and therefore reflect historically reliable features of that environment. But there is nothing ethereal about the demarcations that guarantees they will reflect those features infinitely reliably into the future. One day, the system may wake up to discover that the environmental feature no longer behaves as predicted, and that the action it has coupled to this input is consequently maladaptive. From this feedback it may adjust the category boundaries, which as a result are fuzzy and fluid at the edges.

Sometimes, however, the patterns of the environment are so consistent, and the categories so crisp, that they mislead the holon into experiencing them as absolute and God-given. The brain connectivity of metaphysicists represents fuzzy categories that link sensations to actions, but the illusory absoluteness of these boundaries leads them to conclude that the categories are perfect representations of external reality. They therefore coined the concept of “realness”, and the discipline of “ontology” for discussions about whether an entity possesses this property or not. Unfortunately, we can only know of things because our brains engage in complicity, so “realness” is an unattainable abstraction. However, in terms of serving the system, some category boundaries may still be more appropriate than others. “Realness”, therefore, is actually “reliability” masquerading under a more pompous moniker.

This is made more concrete when we consider genetic evolution – the kind of complicity that is mediated by nucleic acids – though the same reasoning applies to any system capable of adapting. In the phase space of evolution, as with complicity in general, there is no such thing as a global maximum. Species evolve only as far as selection pressures force them to evolve, not towards phenotypes superior to all conceivable forms of competition (the species is only ever exposed to an infinitesimal subset of these). Eventually, to protect it from competition, the species settles into a niche – a particular lifestyle – by evolving specialized structures for detecting environmental features and efficiently linking these to actions. An organism is not under pressure to represent all the features of reality, only the ones that threaten its current standing in relation to other species, which are themselves subject to change. Evolution is an inherently fickle game that keeps changing its own rules for what counts as a winning strategy, and the input-output couplings of an organism are the strategy that has been sufficient for survival up until the present.

Humans have, for example, unlike bats, never been under pressure to evolve echolocation and therefore cannot pick up the high-frequency air-compression waves it relies on. Humans have also done just fine despite the fact that the range of electromagnetic waves we are receptive to is truncated to a modest 390-700 nm, less than a ten-trillionth of the actual spectrum, while snakes and insects can see infrared and ultraviolet light, respectively. Nor have we ever found it a high priority to develop a sensitivity to the odor of butyric acid, which the tick uses to locate the sebaceous follicles of animals. However, because our ecological niche makes us dependent on subtle social interaction for our survival, we have an innate sensitivity to human faces, which makes us experts at recognizing people and their emotions. In short, our category boundaries are tuned to features of the environment that have adaptive significance to us and are relevant to our niche.

[Figure: Holon filtering]

To capture the fact that different organisms – from ticks to metaphysicists – pick up different signals and assume these to constitute the objective reality, the biologist Jakob von Uexküll introduced the concept of an “umwelt” – the animal’s subjective universe, beyond which it normally never seeks. It carries the rather humbling implication that the vast, unimagined majority of all there is goes undetected. We cannot conceive of a reality with echolocation any more than a congenitally blind person can conceive of a reality with light. Organisms may share an ecosystem, yet their theories about the world may be unrecognizable from one another. Not even apparently transcendental properties like space and time are immune from this. For example, simple organisms have no way of identifying distant objects – their spatial “awareness” is limited to what is mapped directly onto their own bodies. Complex animals, meanwhile, can model their environment in 3-D, and as a result perceive space as a volume. As for time, a human second corresponds to something like ten fly seconds: to a fly, a light bulb visibly flickers, and a human palm swooping towards it is like a fast-approaching car that it still has ample time to calmly step out of the way of. Similarly, the speed of thought may be dazzling for a human thinker, but for the neurons instantiating it, it is like a slow-moving bureaucracy.

As a more subtle example of how our perception of reality is intimately tied to how we interact with it, consider how we tend to experience the level of abstraction that we interact with the most – the “basic level categories” – as the most “real” one. Basic level categories tend to have a distinctive gestalt, and their members share the most features in common. In experiments, these are the categorizations we make most readily, and developmentally they enter the lexicon first. For example, “chair” feels like a more natural and real category than the more general “furniture” or the more specific “kitchen chair” because “chair” is the highest level at which we interact with something in roughly the same way, and such categories are therefore the most salient ones in our umwelt.

[Figure: Basic level categories]

Specifically, it makes sense for an organism’s umwelt to be dominated by objects and relations that furnish opportunities for performing an action to attain its niche-specific goals – things that have functional meaning. We may think of the umwelt as a computer interface, and of evolution as an interface designer, trying to make the widgets that enable user actions as discoverable as possible. This idea has been labelled “affordance” by psychologist James J. Gibson. A staircase is an affordance for a human, and figures as a concept in his subjective reality, but it is useless for an insect, and therefore not part of whatever conceptual repertoire an insect can be said to possess. Visual attention also seems to be partly affordance-governed: if you track the eye movements of a person engaged in some task, you will find that fixations cluster on corners, knobs and handles – loci of potential manipulation. Perception is strongly action-oriented.

[Figure: Cat visual attention]

I’m sure you would agree that it would be awkwardly anthropocentric to give entities that are visible or salient to humans any sort of privileged ontological status, and it would be equally unfair to dismiss things like “atoms” and “concepts” as unreal just because they are intangible and unobservable with the naked eye. Equally valid are those entities experienced by other umwelts, as well as by all potential umwelts that have not yet evolved and never will. However, humans are special in that they may extend their umwelts to include new patterns by means other than evolution. Humans possess language, which permits them to associate perceived patterns with arbitrary symbols. An effect of this dual coding is that patterns are more easily retrieved, contemplated and communicated, and by expanding our vocabulary, our sensory systems become sensitized to increasingly abstract features of our environment, making our categories more and more fine-grained. We may, for example, train ourselves to discriminate among fine wines by their earthy aromas, or cultivate our intellects with high-brow terms like “cubism”, “proxy war”, and “umwelt”.

The inanimate-animate continuum of hierarchy theory prompts us to be extremely generous in ascribing subjective realities to holons, so let us go as far as considering man-made devices to have umwelts. Our own umwelts are most dramatically expanded by the development of measurement technologies, which range from telescopes to litmus paper and psychometric scales. A measurement is a three-component interaction, in which one system (the physical variable) stimulates another system (the instrument) so as to cause the latter to respond in a way that a human experimenter can interpret as a numerical value (an umwelt category) and use for comparisons that reveal patterns between samples – patterns which are meaningful for manipulating the world, turning it into an affordance.

[Figure: Measurement]

Just as our evolutionary past has honed our photoreceptors to reliably correlate their firing with specific light frequencies, the development of measuring instruments is a kind of quasi-evolutionary process of painstakingly adjusting the instrument’s umwelt and response so that the same input quantity consistently produces the same output quantity across time and space. Such reliability testing gives us confidence to assume that any change we find when we compare the measurements of two different samples reflects an actual difference. However, the instrument, though consistent, may link the same input quantity to a different output quantity if, for example, it is imperfectly calibrated for one of the samples, making such a comparison meaningless. For example, it wouldn’t be very meaningful for a color-sighted and a color-blind person to argue about whether or not a traffic light turned green. Such a measurement bias is known as “systematic error”, and it shows the subtlety involved in establishing that a data pattern correlates with an invisible pattern that we may have to incorporate into our umwelt.


Because of the statistical origins of both our biological perceptual systems and their artefactual extensions, measurement may reduce our uncertainty about the realness of some pattern, but never entirely eliminate it. After removing systematic error sources, there will still be fluctuations (“random error”) caused by factors that the measuring instrument has not been under pressure to keep track of. Things like temperature, light, and quantum events may cause the obtained value to deviate from its “actual” value, giving this abstract construct a hovering, virtual existence. Except in trivially discrete cases like counting the rabbits in front of you, there is no such thing as a perfect measurement, so to unearth systems in objective reality, statisticians have to collect a whole set of measurements of the same thing, average them so as to represent the set with a single score, and then calculate how much the set varies (a value known as the “standard deviation”) to estimate how much uncertainty this mean value glosses over. The secrets that reality whispers via its imprint on an umwelt are sometimes spoken in a voice so feathery that it is drowned out by surrounding noise, forcing holons to implicitly perform statistical calculations.
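
The mean-plus-standard-deviation routine can be sketched in a few lines. The “true” length and noise level below are illustrative assumptions, standing in for whatever quantity and random error sources a real instrument faces; the averaging and spread calculation are the generic moves described above.

```python
# Minimal sketch of taming random error by repeated measurement.
# The "true" length and the noise level are illustrative assumptions.

import random
import statistics

TRUE_LENGTH = 173.2  # hypothetical quantity we are trying to measure (cm)
NOISE_SD = 0.4       # random error from factors the instrument ignores

measurements = [TRUE_LENGTH + random.gauss(0, NOISE_SD) for _ in range(30)]

mean = statistics.mean(measurements)  # single score representing the whole set
sd = statistics.stdev(measurements)   # how much uncertainty that score glosses over

print(f"mean = {mean:.2f} cm, standard deviation = {sd:.2f} cm")
```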

[Figure: Statistics]

Another source of measurement uncertainty is that we can quantify continuous things like length and mass only to a finite level of precision. The physical quantities stored in vaults that previously defined standard units are themselves uncertain and in flux; in fact, even when basing units on physical constants, our knowledge is limited to a certain number of significant figures. Although we could in principle express the height in our passport in terms of ångströms (one ten-billionth of a meter), as measured by an atomic force microscope, we usually find recording differences more fine-grained than a centimeter superfluous, and instead use a meterstick. By attending to the smallest differences, we risk missing the forest for the trees.

This reflects two fundamental trade-offs in how system boundaries filter physical differences: those of “grain” (spatiotemporal resolution) and “extent” (scope). With fine-grained categories, the smallest distinguishable entity becomes smaller, but we also become worse at detecting patterns at coarser scales. For example, by blurring an image you reduce its pixel content, causing fine patterns to be lost but more abstract patterns to stand out. By zooming in on an image, you shift what counts as the focal entity and what as undifferentiated background – in other words, its extent. As a consequence, you can move across scales, from holon to holon. A system, such as an experimenter, may extend its umwelt to include a new external system only if it filters it with the right degree of acuity.
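
The grain trade-off can be demonstrated on a synthetic image: a coarse gradient (an “abstract” pattern) buried in pixel noise (a “fine” pattern). The image size and the box-blur kernel below are illustrative assumptions; coarsening the grain washes out the pixel-to-pixel detail while the gradient survives.

```python
# Minimal sketch of the grain trade-off using a box blur on a synthetic image.
# The image dimensions and kernel size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Fine-grained "image": a coarse left-to-right gradient plus per-pixel noise.
coarse = np.linspace(0, 1, 64).reshape(1, -1).repeat(64, axis=0)
image = coarse + 0.5 * rng.standard_normal((64, 64))

def box_blur(img, k=8):
    """Coarsen the grain by averaging over k x k neighbourhoods."""
    h, w = img.shape
    return img[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

blurred = box_blur(image)

# Fine detail (noise) shrinks after blurring, while the coarse gradient remains.
print("pixel-to-pixel variation before:", np.abs(np.diff(image, axis=1)).mean().round(3))
print("pixel-to-pixel variation after: ", np.abs(np.diff(blurred, axis=1)).mean().round(3))
```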

[Figure: Grain and extent]

In conclusion, given any adaptive system, there will be self-organized categories of interactions whose boundaries depend on how they affect the system’s survival, and together they constitute an ontology. Perhaps the only metaphysical fact we can state with some semblance of confidence is that reality contains differences, some of which make a difference.


OTOOP Part I.II: Back to Hierarchy Theory

[Figure: Hierarchy]

Ontology rests on the idea that reality is not perfectly uniform – that it has discontinuities. To develop a sense of these “contours”, hierarchy theory provides a crude and attractive account that makes for a good beginning. It may seem unscientific, but it efficiently summarizes many of the concepts that scientists shamelessly and consistently use in their thoughts and writings, concepts which form a kind of tacit metaphysical schema without which science would fall apart. We may therefore indulge in it with good conscience.

Hierarchy theory pictures our reality as bathing in a primeval soup of pixels, structure-less at its finest scales but rippled by the occasional whirl of activity resilient enough to persist over time. Such resilience sometimes emerges when elements interact with each other at a much higher rate than with their surroundings, and, given appropriately tuned observer criteria, it translates into a discontinuity in the fabric of reality. It is helpful to think of such a contained dynamic as a “system” – an entity with a well-defined boundary – but dangerous to forget that reality is nowhere near as neatly partitioned as we believe it to be.

When one whirl encounters another, they threaten to dissolve each other and to both be engulfed by the surrounding chaos. To avoid this, they agree to modify each other, to adapt, in a bid to retain at least some semblance of their former selves. As a consequence of this co-evolutionary process of mutual modification (called “complicity” by some) they merge into a super-whirl that can proceed to build recursively upon itself in future encounters. The previous system boundaries are not erased, but fossilized in how the super-whirl is recognizably multi-levelled. We may dub this tendency “arborization”, and the horizontal interaction across such systems “reticulation”. And so what crystallizes in this arborizing brew, when left to seethe, is an underwater coral reef of interlocking hierarchies, not entirely decomposable (but nearly so) and teeming with a desire to self-complicate.

An important feature of this imagery is that any discontinuity, any “system”, is simultaneously a part and a whole – equally dependent participants in a higher-order relation and self-contained entities themselves. Arthur Koestler, in his classic work “The Ghost in the Machine”, called these two-faced nodes of near-decomposable hierarchies “holons”. Through a parts-within-parts architecture, the superordinate holon imposes negative constraints on behavior, while the activity of subordinate holons drives positive self-assertive behavior. The result is a kind of polarity between centripetal and centrifugal tendencies that underpins the coordination into complex action patterns.

Through their history of mutual modification, each holon has evolved an interface that specifies what behavior will follow a certain kind of interaction. We may view this as a signal instructing a slot machine to respond in a certain manner, or as a rule determining whether an event will take place – a rule that evolutionary feedback has ensured sustains both superordinate and subordinate holons. Like a combination of locks opening in descending order, an input signal to the first holon triggers an input signal to an internal holon, thus unleashing a cascade of sub-unit activation down to the simplest parts, from which intelligent and adaptive behavior emerges.
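
The lock-cascade image can be sketched as nested rule-checking. The holon names, rules, and the single “stimulus” signal below are hypothetical illustrations, not anything specified in the text; the sketch only shows a signal unlocking an outer interface and then propagating to subordinate holons that in turn apply their own rules.

```python
# Minimal sketch of a signal cascading through nested holons.
# The holon names, rules, and signal are illustrative assumptions.

class Holon:
    def __init__(self, name, rule, parts=None):
        self.name = name
        self.rule = rule          # predicate deciding whether the signal "unlocks" this holon
        self.parts = parts or []  # subordinate holons nested inside it

    def receive(self, signal, depth=0):
        if not self.rule(signal):
            return  # the perturbation falls outside this holon's filter
        print("  " * depth + f"{self.name} activated by {signal!r}")
        for part in self.parts:
            part.receive(signal, depth + 1)  # cascade down to subordinate holons

organism = Holon("organism", lambda s: True, [
    Holon("organ", lambda s: s == "stimulus", [
        Holon("cell", lambda s: s == "stimulus"),
    ]),
])

organism.receive("stimulus")
```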

[Figure: Signal boundary]

The most obvious examples of hierarchy are those that are structurally nested, with small entities that, to reap the benefits of symbiosis, join others into larger conglomerations. For example, according to the endosymbiotic theory, organelles like mitochondria were originally free-living bacteria that combined with other prokaryotic cells to form eukaryotic cells. Another example is how anthropologists view human societies as having undergone three distinct stages of organizational complexity, from hunter-gatherer bands, via chiefdoms, to states. Nested hierarchies therefore evolve from the specific to the general.

[Figure: Nested hierarchies]

But hierarchies need not be nested, and system boundaries may be more subtle than cell membranes and house walls: in what is called the “multiplier effect”, a general system evolves into a more specialized one. For example, a biochemical cycle may use one single enzyme (rule) to accelerate the conversion of a protein (signal) based on whether it fits the active site (signal tag). This is very slow, so the cycle starts experimenting with parallel, intermediate stages in which one enzyme makes a few changes and then ensures that the product fits the active site of another enzyme, and so on. An analogous example is the drive for specialization in economic markets. The root node of the hierarchy may be that “humans need shoes”, but production becomes more efficient if it is decomposed into “sole-maker 1 needs leather from producer 1 and cloth from producer 2” and “lace-maker 2 needs thread from producer 3 and plastic from producer 4”. The shoemaker then needs only deal with the sole-maker and the lace-maker, and this arrangement lowers production costs for everyone involved.

[Figure: Non-nested hierarchies]

Through the lens of hierarchy theory, what distinguishes the inert and inanimate from the living and self-propelled is their hierarchical depth – the extent to which they have managed to adapt to challenges posed by other systems, and thus increased their complexity. An atom or a crystal lattice has relatively low depth, and accomplishes nothing impressive single-handedly as a result. Living organisms, the paragons of complexity, meanwhile exhibit baffling degrees of freedom. This complexity is a result of the replication of DNA and its associated machinery, which guarantees that a “sameness” is maintained over time while, by virtue of being an imperfect replicator, also incorporating random errors for natural selection to act upon. The key to system longevity and complexity is thus adaptation. But the most important insight carried by hierarchy theory is that the continuum between simple systems and complex systems – between rocks and organisms – makes ontology applicable to them all. Each holon can be said to have its own ontology, its own set of “real” categories!

References:

Arthur Koestler, The Ghost in the Machine (1967)

Valerie Ahl and T. F. H. Allen, Hierarchy Theory: A Vision, Vocabulary, and Epistemology (1996)

Herbert A. Simon, “The Architecture of Complexity” (1962)

John H. Holland, Signals and Boundaries: Building Blocks for Complex Adaptive Systems (2012)
