VI. Confirmatory and Exploratory

We saw with the brain-as-a-Bayesian-statistician that the number of hypotheses we can entertain is limited, and we saw with Falsificationism that the best hypotheses are those that differentiate between candidate theories. According to science writer Howard Bloom, for intelligent behavior to emerge, a system needs not only “conformity enforcers” (to coordinate the system), “inner judges” (to test hypotheses), “resource shifters” (to reward successful hypotheses) and “intergroup tournaments” (to ensure that adaptations benefit the entire system), but also “diversity generators” – we must make sure that new hypotheses are continually generated.

In the brain as in science (scientists have brains), this can be thought of as random, combinatorial play. Activity spreads stochastically, with varying degrees of constraint, through neural networks to find ideas to associate, and scientists are similarly exposed, via their social environment, to random ideas that they encode in their own neural networks. Institutionally, to safeguard against theoretical blindness and confirmation bias, science therefore encourages a free market of ideas and the habit of conceiving alternative explanations in an article’s closing discussion.

Different research methods vary along a continuum in how constrained the observational filter is. In “qualitative” research, such as interviews, the prior probabilities are weak, and hypotheses emerge over time as promising leads are picked up and shadowy hunches in the minds of the researchers are gradually reinforced. This exploratory, data-driven, “bottom-up” kind of research is necessary in the absence of robust theories. But when we do have high-prior hypotheses available, we may test them using quantitative methods, such as experiments, which by comparison are deductive, confirmatory and “top-down”. The process can be described as a “parallel terraced scan”.
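The role priors play in confirmatory testing can be sketched with Bayes’ rule for two rival hypotheses (the function name and the numbers below are illustrative, not from the text):

```python
def posterior(prior, likelihood, likelihood_alt):
    """Bayes' rule for two rival hypotheses: the probability of the
    hypothesis after seeing data that it predicts with `likelihood`
    and its rival predicts with `likelihood_alt`."""
    return prior * likelihood / (
        prior * likelihood + (1 - prior) * likelihood_alt
    )

# A high-prior hypothesis backed by strong evidence becomes near-certain,
# which is why such hypotheses are worth the cost of a focused experiment.
posterior(0.7, 0.9, 0.2)  # ≈ 0.913
```

With a weak prior (say 0.1), the same evidence leaves the hypothesis far from certain, which is the regime where cheap, exploratory methods earn their keep.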

"Parallel terraced scan" describes a positive feedback-mechanism believed to play a role in human perception, in which a space of possible hypotheses is explored in parallel, but resources are allocated in proportion to how promising each seems.

If we need to scrap the hypothesis following expensive, focused experimentation, we may have to return to square one, to the cheap and unfocused information processing of qualitative research. Just as a brain’s attention can be concentrated or vigilant, science needs both modes. The important thing, following exploration, is not to double-dip in the same data: a hypothesis selected by virtue of its fit with one dataset must be tested against fresh data to gauge its predictive power.
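A minimal guard against such double-dipping is to partition the data up front, exploring one half and confirming on the untouched other half (the function name and the 50/50 split are illustrative choices, not from the text):

```python
import random

def split_for_fresh_test(data, explore_frac=0.5, seed=0):
    """Shuffle the data once, then split it so hypotheses found by
    exploring the first part can be tested on the second, which the
    exploratory analysis never saw."""
    shuffled = list(data)
    random.Random(seed).shuffle(shuffled)  # fixed seed: reproducible split
    cut = int(len(shuffled) * explore_frac)
    return shuffled[:cut], shuffled[cut:]  # (exploratory, confirmatory)

explore, confirm = split_for_fresh_test(range(10))
# the two halves are disjoint and together cover the original data
```

Because a hypothesis chosen for its fit with the exploratory half cannot have been shaped by the confirmatory half, success on the latter speaks to predictive power rather than mere fit.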