Towards the close of childhood, there typically comes a moment of realization that our time is finite while the knowledge out there isn’t. It becomes harder and harder to nourish our pet hobbies, and sustain any youthful obsessions with dinosaurs, pirates or Egyptology. Days pass faster, and we realize that, to absorb so much as a droplet of those surging swells of information, we would either have to drill deep into something narrow, or dabble in the shallow expanses for a superficial grasp.
The Earth’s population is estimated to have generated around 8 zettabytes of data in 2015 (8 trillion gigabytes), a figure that roughly doubles every year. About 100 million books have been printed so far, with around a million new titles appearing each year. The Library of Congress adds more than 10,000 objects daily. Meanwhile, the growth of scientific productivity is proportionately explosive. Derek John de Solla Price, who founded scientometrics in 1963, pointed out an empirical law that still applies: the scientific literature grows exponentially. There are currently about 100,000 journals in the US, a number that doubles every 15 years; the number of universities doubles every 20 years.
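Written as a formula (a generic sketch of doubling-driven growth, not notation taken from Price himself), the law says that a quantity with a fixed doubling period T grows as

N(t) = N_0 \cdot 2^{t/T}

so with T ≈ 15 years for journals, three decades suffice for the count to quadruple.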
More scientists are alive today than have ever lived before, and they face a life of fierce competition for a decreasing number of postgraduate positions. With the low-hanging-fruit discoveries long since picked, back in the days of the lone genius, squeezing facts out of reality is today largely a team effort, with scientists networking at international congresses, assembling teams on the fly, and churning out publications in order not to perish.
Meanwhile, the capacities of human cognition remain constant, and cannot keep pace with the spiraling orders of magnitude at which the scholarly output grows. Computational implants may one day become reality, but until then, we are confined to an attentional aperture that on average can keep at most 7 chunks of information in register at a time. The result is an information overload that makes it extremely difficult to keep abreast of research frontiers and coordinate science, with the consequence that progress could hit a sort of carrying capacity plateau: the rate of research may increase, but the rate of progress may not.
What we see is a tremendous waste of resources: repeated wheel-reinvention and the same old ideas re-emerging again and again under new names. Given how replete the history of science is with examples of multiple independent discovery (evolutionary theory, oxygen, preferential attachment and integral calculus all had several discoverers), the nagging worry of being scooped on some potentially career-enhancing discovery undoubtedly looms large.
Moreover, as scholarship specializes, findings on different sides of an academic department wall may never come into contact with each other. One article could contain “A implies B” and another “B implies C”, but the leap to “A implies C” may forever remain latent, because the two ideas will never flicker across the same human cortex.
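As a toy illustration (a sketch of my own, not anything proposed in the text), such latent leaps could in principle be surfaced mechanically: treat published implications as directed edges and compute their transitive closure. All names and data below are placeholders.

```python
from collections import defaultdict, deque

def transitive_closure(implications):
    """Return every pair (x, z) reachable by chaining 'x implies y' edges."""
    graph = defaultdict(set)
    for a, b in implications:
        graph[a].add(b)
    closure = set()
    for start in list(graph):
        queue = deque(graph[start])
        while queue:
            node = queue.popleft()
            if (start, node) not in closure:
                closure.add((start, node))
                queue.extend(graph[node])
    return closure

# Two findings published in two different fields:
literature = [("A", "B"), ("B", "C")]
print(transitive_closure(literature))
# Contains ("A", "C") - the latent leap, made explicit by a machine.
```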
In order to increase the chance of the right idea hitting the right mind at the right time, information has to be managed. This has many aspects, such as the standardization of methods and nomenclature for easier communication, and serendipity-supporting apps like StumbleUpon that present content at random to prevent intellectual myopia. But the most important aspect of information management is to organize information in a way that minimizes the effort readers must exert in order to understand it, reducing their computational burden, or “cognitive load”. This is where knowledge visualization comes in.
For a long time, knowledge visualization was dominated by the archetype of knowledge as a tree – which, by dint of its recursive branching schema, represents a hierarchical ordering, called a “taxonomy”. Why does this metaphor feel so natural to us? Of course, we have already discussed hierarchies as a general organizing principle, as in Herbert A. Simon’s concept of “near-decomposability”, Arthur Koestler’s “arborization”, and the notion of “algorithmic probability”. In the brain, category hierarchies are believed to originate in how the distributed representations of neural networks may overlap. However, non-overlapping networks may still be connected to each other, and to say that human categorization is strictly hierarchical would be a gross oversimplification. Categorization is known to be an extremely complex affair, in which perceivers flexibly re-carve their reality, constructing new ontologies on the fly to serve present goals and contexts. Nevertheless, some knowledge hierarchies are remarkably universal. Anthropologists have found, for example, that preliterate cultures across the world have independently developed biological classification systems of seven levels, just like Linnaeus’s.
Tree charts, in essence, show unity split into multiplicity. We find them in genealogical kinship portrayals, and in feudally flavored depictions of all things as having a “natural order”, with inanimate minerals at the bottom and humans (or God) at the top (an idea called scala naturae, or the “Great Chain of Being”, first codified by Aristotle and illustrated by Porphyry). Charles Darwin, via his famous notebook sketch from 1837 as well as the famous diagram in the first edition of On the Origin of Species, made the tree the go-to diagram for evolutionary relationships, one that not only shows splitting but also explains the mechanism that induces speciation. This image was then enduringly popularized by the artwork of German naturalist Ernst Haeckel. Furthermore, “cladistics” – the idea of classifying organisms based on shared characteristics traceable to a most recent common ancestor – derives from the Greek word for “branch”, and our language is full of tree metaphors.
Tree visualization was particularly common in medieval manuscripts, not only because of the tree’s allegorical role in Genesis, but because Christian monks were keen practitioners of the ars memorativa (the art of memory), in which trees served as a mnemonic device. By imposing on information what modern eyes would consider contrived, top-down metaphysical orderings, the monks made the knowledge easier to retrieve. Most prominently, the Majorcan polymath Raimundus Lullus published an influential encyclopedia in 1296 called Arbor scientiae, in which he mapped out knowledge domains as sixteen trees of science – an image that lives on to this day in phrases like “the branches of science”.
Lullus, interestingly, is also considered a father of computation theory, by virtue of the many hypothetical devices he invented to mechanically aid both the retrieval of old knowledge and the generation of new knowledge. Among them is the “Lullian circle”, in which discs inscribed around their circumference with symbols representing elemental truths could be rotated to generate new combinations. Inspired by Lullus, Gottfried Leibniz imagined a diagrammatic language (the characteristica universalis) that represented atomic concepts as pictograms that could be combined into compounds which, by a logical-algebraic system (the calculus ratiocinator), could mechanically be determined to be either true or false. It would constitute what he called an “alphabet of human thought”. Clearly, it was already understood in medieval times that strict hierarchies may keep knowledge from combining in fruitful ways.
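The combinatorial intuition behind the Lullian circle is easy to sketch in modern terms: rotating the discs amounts to enumerating every pairing of the symbols inscribed on them. The sketch below is only an illustration; the two symbol sets are loosely inspired by Lullus’s principles rather than a faithful reconstruction of any particular figure.

```python
from itertools import product

# Two hypothetical discs, each inscribed with a handful of "elemental truths".
disc_outer = ["goodness", "greatness", "eternity"]
disc_inner = ["difference", "concordance", "contrariety"]

# Each relative rotation of the discs pairs up symbols; product() enumerates
# every pairing the device could ever display.
for outer, inner in product(disc_outer, disc_inner):
    print(f"{outer} + {inner}")
```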
Although medieval knowledge representations may strike a modern reader as archaic, naïve and a tad megalomaniacal, the problem they address – that of information management – is a serious one, and the ontological challenge faced by librarians – to index and anticipate the space of all possible subjects – is of equally epic proportions. Alphabetical indexes have been around since the Renaissance, but there is still the issue of how to classify a work by subject. The first widespread standardized cataloging system, devised by Melvil Dewey in 1876, is the Dewey Decimal Classification. Like the schemes of his medieval forerunners, Dewey’s system is hierarchical, composed of ten classes, each made up of ten subdivisions, with fractional decimals for further detail. However, it is also “faceted” in that it allows categories to be combined, and is therefore not restricted to a fixed taxonomy.
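How the decimal notation encodes the hierarchy can be shown with a short sketch. The class labels below are standard examples from the scheme, but the code itself is purely illustrative, not how library systems actually implement it.

```python
# Zeroing trailing digits of a Dewey number walks up the hierarchy: 516 -> 510 -> 500.
labels = {
    "500": "Natural sciences and mathematics",
    "510": "Mathematics",
    "516": "Geometry",
}

def broader(call_number):
    """Yield the class itself, then successively broader classes."""
    yield call_number
    for i in range(len(call_number) - 1, 0, -1):
        yield call_number[:i] + "0" * (len(call_number) - i)

print([labels[c] for c in broader("516")])
# ['Geometry', 'Mathematics', 'Natural sciences and mathematics']
```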
Interestingly, Dewey also imagined libraries basing their architectural layout on his ontology. This notion has been entertained by many others. Giulio Camillo, a sixteenth-century philosopher, wrote a book describing a “Theater of Memory” – a seven-tiered amphitheater that one could enter, whose interior was full of boxes that could be opened by a gear apparatus to reveal words and visual metaphors inside. The boxes would be arranged in a hierarchy of increasing abstraction to show how the facts are conceptually connected. While the theater was never completed, informatics pioneer Paul Otlet came closer to achieving it with his Mundaneum, established in Brussels in 1910. Described by scholars as an “analog World Wide Web”, it was a museum intended as a world encyclopedia, filled with drawers of index cards for every piece of intellectual property in the world. Otlet sought to organize its architecture around a central core of grand organizing principles, marking the unification of all knowledge, with colonnades radiating from it, leading to narrower subdomains.
The ideas of Dewey, Camillo, and Otlet may be regarded as physical realizations of the old Greek mind-palace technique (the “method of loci”), in which a person, in the mind’s eye, associates chunks of information with familiar physical locations in a certain order. By exploiting the fact that our spatial memory system is much more powerful than our semantic memory, the summoning of these faux episodic memories makes it easier to retrieve information from our own brains. It is interesting how reasoning about knowledge in general seems near-impossible without invoking spatial metaphor. Concepts appear to inhabit a topography, where they can be close to or far away from each other, within-domain or between-domain, narrow or wide in area, high or low in abstraction. It invites notions of exploration and navigation, of salient landmarks and terra incognita, and hints at a possibility of uniting isolated islands and drifted continents into a primordial Pangaea.
The method of loci, though orderly and hierarchical in nature, ultimately works by association: it increases the number of retrieval cues for the idea you wish to memorize by embedding it in a context of which you already have a very rich mental representation. Our brains may obsess over information compression and squirt dopamine at the prospect of some grand synthesis of ideas, but wherever we look, hierarchies have given way to networks in information storage. For example, the first databases, developed by IBM in the 1960s, had a hierarchical, tree-like structure in which relationships could only be one-to-many (a child node can have only one parent), so that, to retrieve data, the whole tree had to be traversed from the root. Today, ontologies in informatics are primarily represented using entity-relationship diagrams that describe networks of tables known as “relational databases”. Similarly, programming languages generally allow objects to be represented in faceted, multidimensional ways (e.g. Java’s classes and interfaces). In fact, not even evolution itself can be characterized as a strict tree anymore, given the lateral gene transfer observed in bacteria.
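The contrast can be made concrete with a minimal sketch (all table and record names are invented for illustration): in a strict tree each item hangs under exactly one parent, whereas a relational, junction-table design lets the same item be reached from several subjects.

```python
# Hierarchical storage: each record hangs under exactly one parent node, so a
# paper filed under "neuroscience" cannot also sit under "statistics".
hierarchical = {"science": {"neuroscience": {"paper_42": {}}}}

# Relational storage: flat tables plus a junction table allow many-to-many links.
papers = [{"id": 42, "title": "Spike-train analysis"}]
subjects = [{"id": 1, "name": "neuroscience"}, {"id": 2, "name": "statistics"}]
paper_subject = [(42, 1), (42, 2)]  # junction table: (paper_id, subject_id)

def subjects_of(paper_id):
    """Join the junction table against the subjects table."""
    wanted = {s for p, s in paper_subject if p == paper_id}
    return [s["name"] for s in subjects if s["id"] in wanted]

print(subjects_of(42))  # ['neuroscience', 'statistics']
```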
The World Wide Web is a minimally managed platform. While, at its core, it breaks data into smaller units stored on servers, its superstructure fundamentally lacks any hierarchical backbone. Nor does it have bidirectional links or a single entry point per item, and with its meshwork of hyperlinks and cross-references, it is as if many branches lead to the same leaf. Proponents of the Linked Data movement see this “free-for-all” approach as a virtue: owing to the processing power of modern computers, ontologies can be inferred from the disorderly data itself, with themes and categories derived programmatically instead of being imposed upon insertion. Many databases opt for user-driven classification systems (sometimes called “folksonomies”), in which users associate materials with open-ended tags, like Twitter hashtags. This way, the data in its stored state is left completely unorganized, and it becomes the responsibility of the retrieval mechanism to structure it. Thus, classification boundaries become dynamic and self-organizing, and the distinction between data and metadata dissolves. Philosopher David Weinberger, in his book Everything Is Miscellaneous, has described it as “filter on the way out, not on the way in”. For example, Google, a meta-application most of us use for retrieving information, is based on keyword analysis rather than a categorical scheme.
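“Filter on the way out” can be sketched in a few lines (the items and tags below are made up): everything is stored flat, with open-ended tags, and any structure is computed only at retrieval time.

```python
# Flat, unorganized storage: no folders, no fixed taxonomy, just items with tags.
posts = [
    {"title": "Notebook sketch, 1837", "tags": {"darwin", "trees", "history"}},
    {"title": "Mundaneum floor plan", "tags": {"otlet", "architecture", "history"}},
    {"title": "RDF primer", "tags": {"semantic-web", "linked-data"}},
]

def retrieve(*tags):
    """Structure is imposed only here, at retrieval: return posts carrying all tags."""
    wanted = set(tags)
    return [p["title"] for p in posts if wanted <= p["tags"]]

print(retrieve("history"))            # ['Notebook sketch, 1837', 'Mundaneum floor plan']
print(retrieve("history", "darwin"))  # ['Notebook sketch, 1837']
```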
Tim Berners-Lee proposed the WWW while working as a researcher at CERN, motivated by a concern with inefficient communication among scientists. He therefore went on to propose a more centralized, rigid version, called the Semantic Web, which imposes a standard model, the Resource Description Framework (RDF), for encoding data as subject-predicate-object relationships. This facilitates a more dynamic interaction with knowledge, as well as the merging of data from different sources, in a way that, for example, Wikipedia – which bases its presentation on the metaphor of a physical page and is meant to be read by humans rather than computers – does not support. Parallel computing expert Danny Hillis has presented a similar scheme for extracting “meaning” from static documents; an implementation of it called Freebase was sold to Google in 2010 and is used for the Wikipedia-like entries that pop up to the right of certain searches.
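The subject-predicate-object idea is simple enough to sketch with plain tuples (real RDF uses URIs and dedicated libraries; the miniature “graph” below is purely illustrative):

```python
# Each statement is a (subject, predicate, object) triple.
source_a = {
    ("ex:Darwin", "ex:authored", "ex:OriginOfSpecies"),
    ("ex:OriginOfSpecies", "ex:publishedIn", "1859"),
}
source_b = {
    ("ex:Darwin", "ex:bornIn", "ex:Shrewsbury"),
}

# Because every source shares the same triple shape, merging is just set union -
# no page layout or schema negotiation required.
graph = source_a | source_b

# "Query": list everything the merged graph knows about ex:Darwin.
for s, p, o in sorted(graph):
    if s == "ex:Darwin":
        print(p, o)
```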
As pointed out by writer Alex Wright, the Internet makes increasing use of the “stream” metaphor of ephemeral content rather than static pages, which bears upon the century-old idea of our collectively intelligent knowledge network as a “global brain”, with bits flickering through its veins like a vital fluid. Science fiction writer H. G. Wells conceived of an “all-human cerebellum”, philosopher Pierre Teilhard de Chardin entertained his somewhat obscure notion of the “noosphere”, and popular science writers Ian Stewart and Jack Cohen have a similar concept of “extelligence”.
The analogy of the Internet as a brain may seem a bit too quasi-mystical, but the insights gained from studying how humans have optimized information management outside of the brain – from naïve hierarchies to disorganized networks – may give us important clues as to how it is done inside of the brain. Somehow, in the brain, hierarchies and networks coexist and give rise to each other…
Alex Wright’s Glut (2008) and Cataloging the World (2014) overlap in content and cover the history of information management, with the latter focusing on Paul Otlet’s Mundaneum.
Manuel Lima’s Book of Trees (2014) is a beautifully curated coffee-table book about tree diagrams, from Ancient Mesopotamia to Big Data visualizations.
Katy Börner’s Atlas of Science (2010) and Atlas of Knowledge (2015) are about computer-generated visualizations of scientometric data.
Samuel Arbesman’s The Half-Life of Facts (2013) is a popular science book about scientometrics.