Too Hot for Psychology Today

How should we think about thinking? Is even trying to do this akin to trying to open a box with a crowbar locked inside it? A widely shared Aeon article from earlier this year got very angry and confused on the whole issue, concluding that the whole of cognitive science was based on a very simple error—that error being that “humans are computers”. (1) Needless to say, the author, Robert Epstein, was very stern and sarcastic about the foolishness of the assertion that we are little clockwork toys beeping around mindlessly, and he was at pains to set us all right. Unfortunately, in his eagerness to set the whole of science straight, Epstein showed his misunderstanding of science in general, cognitive science in particular, and the march of history into the bargain.

I’d like to deal with the history part first, because it’s something that lots of people don’t know but that I am lucky enough to benefit from directly. The usual story about the “humans as computers” metaphor goes like this: humans have always compared thinking to their most impressive current technology (hydraulics or clockwork, say), and computers are just the latest in a long line. Every other metaphor has failed, so this one will too.

Epstein puts it this way: “In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least. The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning…By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph. Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller.”

Sorry for the long quote—but actually Epstein gives a pretty good survey of the ways that humans have tried to use metaphors to explain human cognition. Good that is, but for one rather glaring omission. Which brings me back to the personal remark at the start. The person missing from this story is the person whose home I can see if I lean out of my office window dangerously far, whose name adorns the lecture halls I teach in and the library I study in. His work, and that of his equally erudite wife, is the major reason for the existence of the machine on which I write this and the reason you can read it. His name is George Boole, and the insights he had—in attempting to analyse all cognition—fully one hundred years before the invention of the physical computer, are the reason computers exist in the first place. (2)

There isn’t the space to go into detail here, but it is Boolean algebra—a general way of analysing the grammar of all possible relations between ideas—that enabled both the cognitive revolution and the information revolution. You don’t need to read Boole’s Laws of Thought to get why this is important (although, please do). The fact that you are reading this on a computer that relies for its existence on the accuracy of his analysis is the actual pudding whose juicy proof you are eating.
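To make that concrete, here is a toy sketch in modern notation (not Boole’s own): his two-valued algebra is enough to build binary arithmetic, which is to say, the core of every computer.

```python
# Boole's two-valued algebra: AND is multiplication; OR and NOT follow from it.
def AND(x, y): return x * y
def NOT(x):    return 1 - x
def OR(x, y):  return x + y - x * y
def XOR(x, y): return OR(AND(x, NOT(y)), AND(NOT(x), y))

def full_adder(a, b, carry_in):
    """One bit of binary addition, built purely from Boolean operations."""
    partial = XOR(a, b)
    return XOR(partial, carry_in), OR(AND(a, b), AND(partial, carry_in))

def add(x, y, bits=8):
    """Add two non-negative integers one Boolean bit at a time."""
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total
```

Boole’s “fundamental law of thought”, x·x = x, is exactly the idempotence that makes two-valued logic, and hence digital circuitry, work.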

Thus it is simply, factually inaccurate to claim that modern cognitive science is drawing on the most impressive current technology to explain thought. In terms of historical progression, it was Boole’s attempt to fully analyse thought that led to the creation, long after his death, of the technology. It was what made the technology possible in the first place—by giving cognition a functional analysis. Functionalism is the thread that runs through all science. It’s the insight that essentialism is a blind alley and that what something is, is what it does. Functionalism about thought—that minds are what brains do—came a hundred years before the technology that was the triumphant vindication of that insight.

So much for history. But it segues neatly into the second point that Epstein fails to appreciate. Cognitive scientists (unless they are very confused) do not think that human minds are computers. Rather, they think that computers—the physical objects on your desktop, say—are just one way to make functions real. Functions are mathematical operations, but we shouldn’t get too hung up on numbers and equations here. You can turn an equation into a physical object yourself (you did it in high school, and called it “making a graph”), and there are many ways to see that functions can be realised in different ways.
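To make the multiple-realisability point concrete, here is a toy illustration (the function chosen is arbitrary): the same function, squaring, realised three different ways, with nothing about the function privileging any one of them.

```python
def square_formula(n):
    """Realisation 1: a closed-form rule."""
    return n * n

# Realisation 2: a lookup table -- the 'graph' you drew in high school,
# stored as pairs rather than plotted.
SQUARE_TABLE = {n: n * n for n in range(10)}

def square_by_addition(n):
    """Realisation 3: a step-by-step process (the sum of the first n odd numbers)."""
    return sum(2 * k + 1 for k in range(n))
```

All three agree on every input, which is all it means for them to realise the same function.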

Machines in general are ways to make abstract functions into physical things. And that’s what cognitive scientists study: functions. For example, “How do we turn electromagnetic inputs into perceptions?” or “How do our past experiences function to make us wary of similar present dangers?”. The author’s frustration that cognitive scientists keep on thinking this way is simply misplaced. Hilariously, he offers what he thinks is an alternative to functional thinking in explaining catching a ball:

“The IP [Information Processing] perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”
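It is worth pausing on what that “computation-free” strategy actually consists of. A minimal sketch (in Python, with made-up launch parameters; an illustration of the McBeath-style strategy, not the original authors’ code): the tangent of the ball’s elevation angle rises linearly only when the fielder stands at the landing point, so detecting and nulling optical acceleration (tracking a quantity and its derivatives over time) is precisely a computation.

```python
def optical_tangent(fielder_x, vx=10.0, vy=20.0, g=9.8, dt=0.1):
    """tan(elevation angle) of a projectile, sampled as seen by a stationary fielder."""
    flight_time = 2 * vy / g
    tans, t = [], dt
    while t < flight_time:
        ball_x = vx * t
        ball_y = vy * t - 0.5 * g * t * t
        if fielder_x - ball_x > 1e-9:          # ball still in front of the fielder
            tans.append(ball_y / (fielder_x - ball_x))
        t += dt
    return tans

def max_optical_acceleration(tans):
    """Largest discrete second difference: zero means a linear optical trajectory."""
    return max(abs(tans[i] - 2 * tans[i - 1] + tans[i - 2])
               for i in range(2, len(tans)))

landing_point = 10.0 * (2 * 20.0 / 9.8)        # where this ball actually lands (~40.8 m)
# Standing at the landing point, tan(angle) grows linearly (no optical acceleration);
# standing 14 m too deep, the very same signal visibly bends.
```

The fielder who “simply” nulls that bend is doing exactly what the IP description says: tracking a represented quantity and correcting against its derivative.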

Oh really? Ok. Build something that does it. I’ll lay a pound to a penny that it’s a lot more difficult than Epstein thinks. He has made the classic mistake (especially egregious for a psychologist) of assuming that what seems simple to conscious access cannot have a wealth of highly complex unconscious processing going on beneath the surface. The brilliant Hans Moravec gave his name to the general error of this kind: Moravec’s paradox. (3)

Moravec asked the question: why is it that it takes the smartest humans to do things like fly planes, diagnose disease, and play chess, when we can make fairly stupid computers that beat all but the best of humans at these with ease? The other side of this coin is that things we wrongly thought would be computationally easy to programme—like walking up stairs, recognising faces, and so on—turned out to be horribly difficult to get computers to do. Why was this? We were making the same mistake as Epstein: forgetting that evolutionarily novel tasks (like chess) have their computational architecture laid bare, while evolutionarily ancient tasks hide theirs. People who think a thing is easy simply haven’t given serious thought to the millions of years that went into making it so. That’s why we need smart people to do things like play chess: because there isn’t that much (in computational terms) to know, and the humans with the biggest brains know it better and faster. As the brilliant roboticist Rodney Brooks pointed out, “Elephants don’t play chess”. Learning to be an elephant took millions of years prior to that particular elephant’s existence. (4)

Epstein gets more and more frustrated with the benighted community of cognitive scientists as he goes on. “To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.”

Or—you could just listen to what that brain is saying? Through its attached mouth, for example? If someone were to object that what I’ve just said is a trick—that you don’t know every single thing that that particular brain is doing at any particular moment—the appropriate response is: “So what?” I don’t know what every single one of the 10²⁶ sub-atomic particles in my cup of coffee is doing on an individual basis either. But I know what they are doing in the aggregate as I pick up the cup, because their collective action is called “temperature”: the mean kinetic energy of the molecules in the liquid (i.e. the average amount of whizzing about they are doing—and good luck trying to map that on an individual basis). I don’t need to know everything in order to know anything.
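The coffee-cup point is just the law of large numbers in action, and a toy simulation makes it vivid (random numbers standing in for kinetic energies; an illustration, not a physical model): the individual values are practically unknowable noise, yet the aggregate is stable and precisely knowable.

```python
import random

random.seed(0)  # a reproducible 'cup of coffee'

def mean_energy(n_molecules=100_000):
    """Assign each 'molecule' a random kinetic energy; return the mean."""
    return sum(random.random() for _ in range(n_molecules)) / n_molecules

# Two cups, two completely different sets of individual values -- but the
# aggregate (the 'temperature') agrees to within a fraction of a percent.
cup_a = mean_energy()
cup_b = mean_energy()
```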

We still have many problems to solve—but they are problems, not mysteries. And there is only one game in town to solve them: Functionalism. The alternative that Epstein offers—essentialism—has gone the way of astrology, alchemy, and homeopathy. And for the exact same reason. Essentialism comes from the pre-science time of humans. It’s magical thinking.

There is a saying that those who think a task is impossible should not get in the way of those achieving it. The irony is that the opponents of cognitive science live in a world where aeroplanes fly themselves, machines govern investments, and artificial eyes can be spliced into the place of lost ones, directly into nervous systems. The functionalist account of the human brain isn’t something we are predicting. We are living in the midst of it. (5)

—Robert King

1) https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
See also Searle, J. R. (1990). Is the brain’s mind a computer program? Scientific American, 262(1), 26–31.
Other accounts drawing on this metaphor include:

Daugman, J. G. (2001). Brain metaphor and brain theory.

de La Mettrie, J. O. (1748/1960). L’Homme Machine [Man a Machine].

2) Boole, G. (1854). An investigation of the laws of thought: on which are founded the mathematical theories of logic and probabilities. Dover Publications.
I work at UCC, and in honour of Boole’s bicentenary his house is being restored.

3) Moravec, H. P. (2000). Robot: Mere machine to transcendent mind. Oxford University Press.

4) Brooks, R. A. (1990). Elephants don’t play chess. Robotics and autonomous systems, 6(1), 3-15.

5) King, R. J. (2016). I Can’t Get (no) Boolean Satisfaction. Frontiers in Psychology. http://dx.doi.org/10.3389/fpsyg.2016.01880
