The research behind the first room of the Menimagerie – **the Entropical Conservatory** – was very labor-intensive and very eclectic. The idea was to start from the impressionistic ideas of hierarchy theory and show how they underlie information theory, probability theory, selection effects, and statistics. As promised at the beginning, it therefore starts by foregrounding the connective tissue between all these concepts before going into mathematical detail. I personally think this is the most satisfying way of teaching things. People vary in their need for cognitive closure – some are happy to study a statistics course by learning the procedures as isolated recipes – but I think I speak for many others when I say that working with a method we understand only shallowly is uncomfortable, and probably ineffective. I could not find a single textbook that explained *why* normal distributions are so common (explanation: selection effects) or why the standard deviation takes the square root *after* division, and I was very lucky to stumble upon books that opened my eyes to Bayesian and likelihood analysis. In this section, I list (incompletely) the books that informed the previous section.
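To make the standard-deviation point concrete, here is a minimal sketch (the data values are made up for illustration) showing the order of operations: square the deviations, average them, and only *then* take the square root:

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # made-up sample
mean = sum(data) / len(data)

# Variance: the average of the squared deviations (divide first)
variance = sum((x - mean) ** 2 for x in data) / len(data)

# Standard deviation: square root taken AFTER the division,
# which brings the measure back to the original units of the data
std_dev = math.sqrt(variance)  # → 2.0 for this sample
```

Taking the root after averaging is what makes the standard deviation comparable to the raw data: the variance is in squared units, and the final root undoes the earlier squaring at the level of the aggregate rather than the individual deviations.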

Information Theory

Philosopher Andy Clark explains “umwelt” and “affordance” beautifully in the books *Being There* (1998) and *Mindware* (2000).

James Gleick’s *The Information* (2012) is, at 500+ pages, *very* informative, but mostly concerns the history behind information theory, with many biographical portraits of the key figures. A plus is that it takes a very broad view of its subject, for example by starting with ancient communication systems like African talking drums.

John R. Pierce’s *An Introduction to Information Theory* (1980) is the go-to text for conceptual explanation.

Vlatko Vedral’s *Decoding Reality* (2012) and Seth Lloyd’s *Programming the Universe* (2007) are both popular science books about quantum computing, but they are also the clearest explanations of information that I have found. Charles Petzold’s *Code* (2000) is about computation, but also very nice and concept-oriented.

*Information and the Nature of Reality* (2014), edited by Paul Davies and Niels Henrik Gregersen, is a collection of philosophy essays on information that I found extremely illuminating.

John D. Barrow’s *The Artful Universe* (1996) explains noise.

Bayes and Probability Theory

For light introductions to probability theory, see Leonard Mlodinow’s *The Drunkard’s Walk* (2009) and John Haigh’s *Probability: A Very Short Introduction* (2012).

Bayes’ theorem is explained most clearly in James V. Stone’s *Bayes’ Rule* (2013).

Predictive coding is explained non-technically in Jakob Hohwy’s *The Predictive Mind* (2013). **Note: Andy Clark has just come out with a book about this, *Surfing Uncertainty*, which I have not yet read.**

The base-rate fallacy is explained best by one of the researchers who discovered it, in Daniel Kahneman’s *Thinking, Fast and Slow* (2012).
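As a quick illustration of why the fallacy bites (the numbers here are hypothetical, not taken from Kahneman’s book): even a test that is 99% accurate, applied to a condition with a base rate of 1 in 1,000, produces mostly false positives – a result that Bayes’ theorem makes explicit:

```python
# Hypothetical numbers illustrating the base-rate fallacy via Bayes' theorem
prior = 0.001            # base rate: 1 in 1,000 people has the condition
sensitivity = 0.99       # P(positive test | condition)
false_positive = 0.01    # P(positive test | no condition)

# Total probability of a positive test (law of total probability)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: P(condition | positive test)
posterior = sensitivity * prior / p_positive  # ≈ 0.09, i.e. only about 9%
```

Ignoring the prior, one would guess the posterior is near 99%; the base rate drags it down to roughly 9%, because the many healthy people generate far more false positives than the few sick people generate true positives.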

Philosophy of Science and Statistics

John D. Barrow’s *Theories of Everything* (2008) explains selection effects in cosmology, while his *Pi in the Sky* (1993) does the same for the philosophy of mathematics.

Max Tegmark’s *Our Mathematical Universe* (2015) is probably the most digestible book about quantum theory and the Many-Worlds interpretation. He also happens to be Swedish, and drops references to things from my home country throughout, which is fun.

Tim Lewens’ *The Meaning of Science* (2015) really extracts the key concepts and presents them without obscure jargon.

David Salsburg’s *The Lady Tasting Tea* (2002) presents the history of statistics through interesting anecdotes. It is helpful to see just how fraught with conflict statistics is, and that it should be seen not as something objective or absolute, but as a useful artifact.

David P. Feldman’s *Chaos and Fractals* (2012) explains normal distributions in the best way I have seen.

Jordan Ellenberg’s *How Not to Be Wrong* (2015) is an amazingly interesting book about mathematical thinking, with chapters on regression to the mean and Bayesian analysis.

Zoltan Dienes’ *Understanding Psychology as a Science* (2008) is an impossibly good book to which this blog is highly indebted. It has very clear introductions to Neyman-Pearson, Bayes and likelihood. Really, *every* science student should be obliged to read it.

Gerd Gigerenzer’s *Rationality for Mortals* (2010) and *Bounded Rationality* (2002) are marvels of insight. Reading Gigerenzer’s books makes you happy.

Alex Reinhart’s *Statistics Done Wrong* (2015) explains common statistical errors in a non-technical way. Again, everyone should read it.