The Tractable Cognition Thesis is the proposal that every cognitive process in the brain can be modeled by a polynomial-time algorithm. This includes situations where the brain solves problems drawn from NP-complete domains. In those situations, it is assumed the brain is only handling a restricted subset of the NP-complete domain, a subset whose instances can be solved by a polynomial-time algorithm. With these assumptions in place, the overall implication is that each process in the brain can be emulated by some specific polynomial-time algorithm.
However, there is a gap in this logic when it comes to NP-complete problems. It is well known that humans routinely tackle problems that are NP-complete in the general case. Route planning, scheduling, and even grocery shopping are mundane tasks that fall within NP-complete problem domains. More playfully, many popular games are NP-hard as well, including the very popular Candy Crush.
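To see why even grocery-run route planning blows up, consider a brute-force solver for a toy traveling-salesman instance. The five-stop distance matrix below is made up for illustration; the point is that exhaustive search examines (n−1)! orderings, so 5 stops means 24 routes but 15 stops already means tens of billions.

```python
from itertools import permutations

# Hypothetical symmetric distances between 5 stops on an errand run.
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def brute_force_tour(dist):
    """Return the cheapest round trip starting and ending at stop 0,
    by exhaustively checking all (n-1)! orderings of the other stops."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0, *perm, 0)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

best_cost, best_tour = brute_force_tour(DIST)
```

Exhaustive search is guaranteed optimal but its running time grows factorially with the number of stops, which is why nobody, human or machine, plans large routes this way.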
Of course, it is also apparent that humans cannot easily solve NP-complete problems optimally, if at all. Just like computer algorithms, when problem instances grow large, humans must resort to heuristics and approximations. Thus, it appears that insofar as humans find solutions to NP-complete problems, they are making a quality-for-time tradeoff: accepting approximate solutions that can be computed in polynomial time.
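This tradeoff can be sketched with a classic polynomial-time heuristic for the same kind of toy routing instance as above (the distance matrix is again made up): greedily visit the nearest unvisited stop. It runs in O(n²) time but can return a worse tour than the true optimum.

```python
# Hypothetical symmetric distances between 5 stops on an errand run.
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def nearest_neighbor_tour(dist):
    """Greedy O(n^2) heuristic: always move to the closest unvisited stop.
    Polynomial time, but not guaranteed to find the optimal route."""
    n = len(dist)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(0)  # return to the start
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return cost, tour

cost, tour = nearest_neighbor_tour(DIST)
```

On this instance the greedy tour costs 28 while the true optimum is 26: the heuristic trades a little solution quality for a running time that stays polynomial.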
Even if the tradeoff runs in polynomial time, that is necessary, but not sufficient, to sustain the Tractable Cognition Thesis. Though solving a particular instance involves a polynomial-time algorithm, this does not imply the same algorithm works across all instances that humans solve within a given NP-complete problem domain. If humans are indeed using different algorithms for different instances, then even if every individual algorithm runs in polynomial time, the meta-problem of finding those algorithms no longer lies within the NP-complete domain. The problem of discovering new algorithms belongs to the undecidable category of problems.
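One way to make the undecidability claim precise, as a sketch under standard computability-theoretic assumptions: consider the set of programs that run in polynomial time, and reduce the halting problem to membership in that set. Given a program q and input y, build a program p that on input x simulates q on y for |x| steps, and only if the simulation halts, idles for a further 2^|x| steps.

```latex
\[
  \mathrm{POLY} \;=\; \{\, p \mid \exists k\;\, \forall x:\ \mathrm{time}_p(x) \le |x|^k + k \,\}
\]
\[
  p \in \mathrm{POLY} \iff q \text{ never halts on } y
\]
```

If q never halts on y, then p runs in linear time; if q halts on y after T steps, then p takes exponential time on every input of length at least T. So a decider for POLY would decide the halting problem, and recognizing polynomial-time algorithms, let alone discovering them, is undecidable.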
Consequently, since finding polynomial-time algorithms requires a meta-algorithm that operates in the undecidable domain, it becomes difficult to see how the Tractable Cognition Thesis could be correct. Sustaining the thesis requires the preexistence of a significant amount of algorithmic mutual information between the human brain and highly abstract problem domains.
At this point, we could appeal to evolution to justify the existence of this algorithmic mutual information. Yet evolution itself suffers from the same problem. Evolution, being a computable process, cannot generate more algorithmic mutual information with a problem domain than it already contains.
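The conservation claim being invoked here can be stated in a hedged form, following Levin's information-conservation inequality, where K denotes prefix Kolmogorov complexity and f is any computable transformation (such as an evolutionary process acting on a genome x):

```latex
\[
  I(x : y) \;=\; K(x) + K(y) - K(x, y)
\]
\[
  I\big(f(x) : y\big) \;\le\; I(x : y) + K(f) + O(1)
\]
```

In words: applying a computable process f can raise the mutual information between x and a target domain y by at most the complexity of f itself, plus a constant, so no computable process can manufacture unbounded mutual information from nothing.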
At this point, we are stuck between a rock and a hard place. There is no justification for merely assuming that massive amounts of algorithmic mutual information exist in the brain. Nor is there a plausible natural source of this algorithmic mutual information, since all natural processes are computable and thus suffer the same problem as evolution. At best, highly abstract problem-solving algorithms may be implicit in natural processes, but even if natural processes exhibit such algorithms, that is not the same as possessing information about the algorithms in a transmittable form.
Consequently, we must look outside of natural processes for a source of this information. Our first guess might be God, and that would be a good guess. But this guess raises a further question. If God Himself can create information, then He could just as well create an information creator. A God who can create an information creator is more powerful than a God who can only create information. Thus, echoing Anselm’s Ontological Argument for a being than which nothing greater can be conceived, we would likewise say: since a creator Creator is more powerful than a mere creator, God must be a creator Creator.
Now we use a modified form of Ockham’s razor, which I call Holloway’s chainsaw. Ockham’s razor says the simplest explanation is best, whereas Holloway’s chainsaw says the most powerful explanation is best. And a more powerful explanation of the mind’s ability to deal with the algorithm-generation problem than implicit information is that the human mind can create information. So, given the choice between the implicit-information hypothesis and the information-creation hypothesis for the human mind, the information-creation hypothesis is favored by Holloway’s chainsaw.
After loud roaring and flying woodchips, what does Holloway’s chainsaw leave us with? It leaves us with the Intractable Cognition Thesis, namely that the human mind solves NP-complete problems in polynomial time by creating information.