
The Golden Age

“Creating the future is a frightening enterprise, especially when we do it without any awareness of the past. I am amazed how little we actually care to examine past human experience. It’s like hunting in a wood full of bears, ignoring all the disarticulated skeletons of dead hunters, and confidently proclaiming that bears don’t really exist. They belong to the past!”—Joseph Gresham Miller

Do you dream primarily of what is, what once was, what could have been, or what could be? Your answer to this question tells me almost everything I need to know about you. Political conservatives locate their Golden Age somewhere in the not-too-distant past (e.g., the 1950s), whilst religious fundamentalists locate it somewhere in the unsullied early history of their movement (e.g., the Early Church for Pentecostals, the Pious Predecessors for Salafists). Progressives and starry-eyed idealists locate it somewhere in a future purged of the sins of the present, whilst Romantics locate it in a past purged of modernity, a pastoral place that looks a whole lot like The Shire described by J.R.R. Tolkien in The Lord of the Rings. Most environmentalists seem to locate it in some eco-friendly pre-modern past wherein we all lived in happy harmony with sweet Mother Earth. Computer geeks locate it in a shiny future replete with flying cars, robots, and killer apps, whilst defenders of the status quo, apologists of the present like Steven Pinker, insist that we’re living in a Golden Age right now. The outliers, of course, are the pessimists, like Arthur Schopenhauer and St. Augustine, who insist that life in The City of Man has always more or less sucked, and that there has never been, nor will there ever be, a Golden Age.

St. Augustine argues in The City of God that Original Sin has so corrupted human nature and the natural world—with sin, disease, and death—that the reformation of the individual and of society will always, of necessity, have to be a highly circumscribed exercise. All is not possible, insists the Bishop, because the freedom to do good is habitually hemmed in by this-worldly corruption. “The choice of the will,” avers Augustine, “is genuinely free only when it is not subservient to faults and sins.” St. Paul the Apostle likewise believes that decisive victory in the war against sin is not possible in a fallen world; the battle is, instead, fated to rage on and on, even within his body: “I know,” he once lamented, “that in me (that is, in my flesh,) dwelleth no good thing: for to will is present with me; but how to perform that which is good I find not. For the good that I would I do not: but the evil which I would not, that I do” (Romans 7:18-19). Like Paul, Augustine maintains that there are some intractable human problems which the individual and society will have to grapple with again and again, until the end of time. Perfection can be nothing more than a noble goal in The City of Man. Always before us, yet perpetually out of reach. A beacon on the horizon of a fallen world.

—John Faithful Hamer, The Myth of the Fuckbuddy (2017)

Elfwick’s Law

A fortnight back, The Guardian newspaper (1) published a worrying article about the rise of fascism—in its new shiny manifestation, spurred on by various online forums. The sub-headline was worrying enough:


“It started with Sam Harris, moved on to Milo Yiannopoulos and almost led to full-scale Islamophobia. If it can happen to a lifelong liberal, it could happen to anyone”. (2)

It made for grim reading, talking about “cult-like” aspects and flirtations with the far right. The poor author, who had started out as a “normal white liberal”, had almost been brainwashed into the “alt-right”, enveloped in a web of “indoctrination”, but drew themselves back from the brink because “[D]eep down, I knew I was ashamed of what I was doing…”

Some of us who have followed Sam Harris, and his much-maligned attempts to raise the level of public intellectual debate above the banal and asinine, smelled a rat at the first headline. But, for those unfamiliar with him or his work, there were some not so subtle signals. The brainwashed writer went on: “On one occasion I even, I am ashamed to admit, very diplomatically expressed negative sentiments on Islam to my wife. ‘[W]e should be able to discuss these things without shutting down the conversation by calling people racist, or bigots.’”

(Horrifying indeed!)

Oh dear. Anyone who had not seen the signs by this time had been led up a garden path, one decorated with crazy paving, and bordered by Mad Dog-Weed.

The Guardian had been spoofed.

“I’m not a ‘Grammar-Nazi’, I’m ‘Alt-Write’”…

The article had not come from some anonymous anxious young white man who had just managed to pull himself back from the brink of full-blown Nazi extremism after all. So, where had it come from?

There is a scurrilous (and sometimes hilarious) online troll who calls himself “Godfrey Elfwick” and styles himself on Twitter:

“Genderqueer Muslim atheist. Born white in the #WrongSkin. Itinerant jongleur. Xir, Xirs Xirself. Filters life through the lens of minority issues.”


His account parodies the self-abasing virtue signalling of elements of the far left, and is frequently painful reading for the liberally inclined.

“Elfwick” came forward and admitted that the piece was his. It certainly fits with his normal output, and in the time I’ve been aware of him, this is the first time he’s broken through the fourth wall and come out of character. Some were outraged at his fooling of The Guardian, but I think his example is a reminder of the important role that satire has to play in the modern marketplace of ideas.

The Day the Music Died.

The great satirical songster Tom Lehrer dramatically declared the death of satire on the occasion of the awarding of the Nobel Peace Prize to Henry Kissinger. How was he, a mere satirist, to ridicule, by parody and extension, the awarding of the world’s highest peace honor to a man who ordered the carpet bombing of civilians on Christ’s birthday? As the cliché has it: you couldn’t make it up.

This supposed death of satire was much exaggerated. There is always a role for pushing the boundaries of beliefs into absurdity, and one such occasion is when the bearers of those beliefs seem not to have realized that the absurd is where they have taken up more or less permanent residence. And let’s be specific about what I mean by “absurd” here: it means to have abandoned one’s critical faculties to the extent that one is governed by wishful thinking. And one of the ways this is revealed is that the difference between real and fake no longer matters to you. Talk of post-truth worlds or fake news is hot air. We humans have always been suckers for hearing what we want to hear. Satire has always been one of the cures.

But it’s more than just fun at the expense of the hoaxed. A foundational ability in any discipline must be the ability to tell the real from the fake. Art experts who praised the “furious fastidiousness” of the brushstrokes of Pierre Brassau (actually Peter, a four-year-old chimp from Borås Zoo) confirmed what many of us suspected about modern art expertise. (3) The knowledge that wine experts can be fooled by switching expensive and fake labels casts a lot of their expertise into doubt. (4) In the 1970s, Rosenhan’s classic “On Being Sane in Insane Places” study threw the whole of the psychiatric community into disarray by showing that mental health care professionals of the time couldn’t distinguish real patients from ones who were faking it. (5)

Why can’t the opposition just recognise that they are evil and stupid?

An oft-repeated finding in psychology is that expectation conditions perception. We are notoriously easy to hoax when you give us what we want to see. From the Cottingley Fairies, to the Roswell Alien Autopsy, through the Book of Mormon, to Uri Geller, the history of humanity is a history of people seeing daft things because they wanted to.

This is one reason I advise all my students to study a bit of magic. Not enough to turn pro, but just enough to see how hoaxable we all are. It’s like any self-defence course, although in this case it’s mental self-defence. It’s a humbling experience. Anyone can be blindsided and beaten in a fight. Likewise, any of us can be fooled if someone matches their pitch to our expectations. Ideally, of course, a good scientist should have no expectations, but scientists are human too. Uri Geller, for instance, managed to hoax a number of famous physicists, but no magicians.

This is one place where satire comes in. In the 1990s, Sokal gloriously hoaxed a post-modernist journal called Social Text. (6) He produced an article of high-sounding gibberish that the editors happily let through to publication, as it appeared to speak to their idea that science was just one way of knowing among many. It was filled with supposed support from physics for bizarre claims about “physical ‘reality’” being fundamentally “a social and linguistic construct”, and with calls for a “postmodern science [that] provide[s] powerful intellectual support for the progressive political project”.


When he revealed the hoax, what did the editors do? Remove the article in embarrassment? Shore up their editorial policies? Laugh along? Not a bit of it: they somehow tried to maintain the fiction that this tosh was meaningful all along, losing any opportunity to develop their thinking, if thinking it ever was. After Rosenhan’s study, the field of psychiatry made a concerted effort to tighten its procedures, resulting in new editions of the Diagnostic and Statistical Manual. Whether it was 100% successful is a different question, but there was an effort to reform in response. Post-modernism as a field never took this option. Having effectively amputated itself from critical self-reflection, it is now largely moribund, although versions of it still exist to poison efforts at critical reflection in the academy.

Don’t like my opinions of post-modernism? Well, they are true for me…

Now, I’m not claiming that expertise rests on getting it right every time. Expertise does not imply that. But the desire to understand a phenomenon must involve disciplined attention to mistakes: when one is fooled (by nature, colleagues, the maliciously mischievous, or oneself), one goes back and studies how it happened, so that it doesn’t happen again. To not do this is to live forever wishfully, rather than authentically attending.

So, what’s the next step? Here’s my suggestion. There are a number of famous Internet laws. Rule 34 is the famous law that somewhere there is a porn version of everything. (7) Godwin’s Law is the tendency of all Internet discussions, over time, to drift towards an accusation that the opponent is Hitler. An addendum to Godwin’s Law is that the participant who first yields to the temptation to Hitlerise their opponent automatically loses. (8) Poe’s Law is the rule that any right-wing fundamentalist Internet site is indistinguishable from a satirical parody of right-wing fundamentalist Internet sites. A few minutes on Alex Jones’s site will confirm the truth of this. But why should the right wing have it all their own way when it comes to being mocked?

I think we need a new Internet Law to invoke that mirrors Poe’s Law. If a piece of far left virtue signalling cannot be reliably distinguished from a satirical version of it, then this deserves its own nomenclature.

Given his latest achievement I would like to propose the term “Elfwick’s Law” to mark such occasions. If nothing else this would serve as a reminder that descending into parody, and not caring about real or fake, is not the preserve of any political tribe, but is part of common humanity. That’s real equality for you.

—Robert King


References

1) For those not in the UK—The Guardian is a respectable left-leaning broadsheet newspaper.

4) Hodgson, R. T. (2008). An examination of judge reliability at a major US wine competition. Journal of Wine Economics, 3(02), 105-113.

http://www.theatlantic.com/health/archive/2011/10/you-are-not-so-smart-why-we-cant-tell-good-wine-from-bad/247240/

5) Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250-258.

A good write-up is here http://www.holah.karoo.net/rosenhanstudy.htm

6) Sokal, A. D. (Ed.). (2000). The Sokal hoax: the sham that shook the academy. U of Nebraska Press.

7) My advice is to never, ever, check on the truth of this.

8) In the light of recent events the use of Godwin’s Law is under judicial review

9) For more details of the Heterodox Academy see: http://heterodoxacademy.org/

Of course, it’s also possible that Godfrey Elfwick is playing some elaborate game of double bluff and I have been fooled along with the others, which would have a touching irony about it! But let the record show that when a respected newspaper (The Guardian) and a respected journalist (Glenn Greenwald) were confronted with the hoax accusations, their response was to double down and, in Greenwald’s case, to insist that truth was not the issue: the piece spoke to a “deeper truth”.

No it doesn’t. Not if it’s false it doesn’t. That’s what true and false mean.
“Elfwick” broke character the day after, for the only time I’ve known him to, in order to share his workings on the hoax, and I reproduce them here. Could these also be faked? Well, of course they could, but it’s worth asking: why would he pick this one to lie about? And even if he did, what is going on with a journalist telling the world that mundane sorts of truth (you know, the ones that are actually true) no longer matter? When Harris retweeted a story that turned out to be false, he apologized publicly (https://twitter.com/SamHarrisOrg/status/675030323923656704). This is how public debate should be conducted.


(Shared via screenshot from Godfrey Elfwick’s Twitter account on 29/11/2016)

 

Too Hot for Psychology Today

How should we think about thinking? Is even trying to do this akin to trying to open a box with a crowbar locked inside it? A widely shared Aeon article from earlier this year got very angry and confused on the whole issue, concluding that the whole of cognitive science was based on a very simple error: that “humans are computers”. (1) Needless to say, the author, Robert Epstein, was very stern and sarcastic at the foolishness of the assertion that we are little clockwork toys beeping around mindlessly, and he was at pains to set us all right. Unfortunately, in his eagerness to set the whole of science straight, Epstein showed his misunderstanding of science in general, cognitive science in particular, and the march of history into the bargain.

I’d like to deal with the history part first, because it’s something that lots of people don’t know but that I am lucky enough to benefit from directly. The usual story about the “humans as computers” metaphor goes like this: humans have always compared thinking to their most impressive technology of the day (hydraulics or clockwork, say), and computers are just the latest in a long line. Every other metaphor has failed; so will this one.

Epstein puts it this way: “In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least. The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning…By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph. Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller.”

Sorry for the long quote—but actually Epstein gives a pretty good survey of the ways that humans have tried to use metaphors to explain human cognition. Good, that is, but for one rather glaring omission. Which brings me back to the personal remark at the start. The person missing from this story is the person whose home I can see if I lean out of my office window dangerously far, whose name adorns the lecture halls I teach in and the library I study in. His work, and that of his equally erudite wife, is the major reason for the existence of the machine on which I write this and the reason you can read it. His name is George Boole, and the insights he had—in attempting to analyse all cognition—fully one hundred years before the invention of the physical computer, are the reason computers exist in the first place. (2)

There isn’t the space to go into detail here but it is Boolean algebra—a general way of analysing the grammar of all possible relations between ideas—that enabled the cognitive revolution and the information revolution. You don’t need to read Boole’s Laws of Thought to get why this is important (although, please do). The fact that you are reading this on a computer that relies for its existence on the accuracy of his analysis is the actual pudding whose juicy proof you are eating.
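For the curious, here is a minimal sketch of the idea, written in modern Python rather than Boole’s own notation (the function names and the eight-bit width are my own choices, purely for illustration): an algebra over just the values 0 and 1, with operations like AND, OR, and XOR, is enough to build ordinary arithmetic, which is roughly what the machine in front of you is doing right now.

```python
# A minimal illustrative sketch (my own, not Boole's notation): reasoning over
# propositions reduced to an algebra on 0 and 1. Chain enough of these
# operations together and you get arithmetic "for free".

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add three one-bit values using nothing but Boolean operations."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add(x, y, width=8):
    """Add two small integers bit by bit, purely via the full adder."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(19, 23))  # 42: arithmetic assembled from Boolean logic alone
```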

Thus, it is simply factually inaccurate to claim that modern cognitive science is drawing on the most impressive current technology to explain thought. In terms of historical progression, it was Boole’s attempts to fully analyse thought that led to the creation, long after his death, of the technology. It was what made the technology possible in the first place, by giving cognition a functional analysis. Functionalism is the thread that runs through all science. It’s the insight that essentialism is a blind alley and that what something is, is what it does. Functionalism about thought (that minds are what brains do) came a hundred years before the technology that was the triumphant vindication of that insight.

So much for history. But it segues neatly into the second point that Epstein fails to appreciate. Cognitive scientists (unless they are very confused) do not think that human minds are computers. Rather, they think that computers, the physical objects on your desktop say, are just one way to make functions real. Functions are mathematical operations, but we shouldn’t get too hung up on numbers and equations here. You can turn an equation into a physical object yourself; you did it in high school (you called it “making a graph”). And there are many ways in which the same function can be realised.
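To make that multiple-realisability point concrete, here is a toy sketch of my own devising (the example and names are invented for illustration): three very different “machines” that realise the same abstract function, squaring a small number. From the functionalist point of view they are the same function, because what a function is, is what it does.

```python
# Three different realisations of the same abstract function, f(x) = x^2,
# over the inputs 0..9 (a toy example of my own devising).

def square_formula(x):
    return x * x                          # a closed-form expression

SQUARE_TABLE = {x: x * x for x in range(10)}
def square_lookup(x):
    return SQUARE_TABLE[x]                # a lookup table: the "graph" made into an object

def square_by_addition(x):
    total = 0
    for _ in range(x):                    # repeated addition; no multiplication anywhere
        total += x
    return total

# Identical input-output behaviour: one function, three realisations.
assert all(square_formula(x) == square_lookup(x) == square_by_addition(x)
           for x in range(10))
```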

Machines in general are ways to make abstracted functions into physical things. And that’s what cognitive scientists study: functions. For example, “How do we turn electromagnetic inputs into perceptions?” or “How do our past experiences function to make us wary of similar present dangers?” The author’s frustration that cognitive scientists keep on thinking this way is simply misplaced. Hilariously, he offers what he thinks is an alternative to functional thinking in explaining catching a ball:

“The IP [Information Processing] perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”

Oh really? OK. Build something that does it. I’ll lay a pound to a penny that it’s a lot more difficult than Epstein thinks. He has made the classic mistake (an especially egregious one for a psychologist) of thinking that what seems simple to conscious access has no wealth of highly complex unconscious processing going on beneath the surface. The brilliant Hans Moravec gave his name to the general error of this kind: Moravec’s paradox. (3)
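For what it’s worth, here is a crude attempt at “building something that does it”. This is my own toy simulation, not McBeath and colleagues’ published model, and every number in it is arbitrary; the fielder follows only an angle-based rule related to the linear optical trajectory strategy: if the ball’s image is climbing faster than it was at first glance, back up; if slower, move in.

```python
# A toy simulation (my own sketch; parameters are invented). The fielder never
# models the trajectory and never predicts the landing point. The only rule:
# watch the tangent of the ball's elevation angle and keep it climbing at the
# rate seen on first glance -- too fast means the ball is going long (back up),
# too slow means it is falling short (move in).

DT, G = 0.02, 9.81                        # timestep (s), gravity (m/s^2)

def simulate(vx=18.0, vy=22.0, fielder_start=60.0, speed=8.0):
    fx, t, target_rate = fielder_start, 0.0, None
    while True:
        t += DT                           # plain projectile physics for the ball
        bx = vx * t
        by = vy * t - 0.5 * G * t * t
        if by <= 0:                       # the ball has come down
            return abs(fx - bx)           # fielder's distance from the landing point
        tan_elev = by / (fx - bx)         # what the fielder senses: tan(elevation angle)
        rate = tan_elev / t               # how fast, on average, the image has been climbing
        if target_rate is None:
            target_rate = rate            # first glance fixes the reference rate
        elif rate > target_rate:
            fx += speed * DT              # image climbing too fast: run back
        else:
            fx -= speed * DT              # image climbing too slowly: run in

landing = 18.0 * (2 * 22.0 / 9.81)        # where this particular ball lands (~81 m)
print(f"fielder using the heuristic ends up {simulate():.2f} m from the ball")
print(f"a fielder who stood still would have been {abs(60.0 - landing):.2f} m away")
```

Notice that even this supposedly computation-free strategy is a function from sensed angles to motor commands, evaluated over and over in a feedback loop, however effortless the equivalent feels from the inside.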

Moravec asked the question: why is it that it takes the smartest humans to do things like fly planes, diagnose disease, and play chess, when we can make fairly stupid computers that beat all but the best of humans at these with ease? The other side of this coin is that things we wrongly thought would be computationally easy to programme, like walking up stairs, recognising faces, and so on, turned out to be horribly difficult to get computers to do. Why was this? We were making the same mistake as Epstein: forgetting that evolutionarily novel tasks (like chess) have their computational architecture laid bare. People who think a thing is easy simply haven’t given serious thought to the millions of years that went into making it so. That’s why it takes smart people to do things like play chess: because there isn’t that much (in computational terms) to know, and the humans with the biggest brains know it better and faster. As the brilliant roboticist Rodney Brooks pointed out, “Elephants don’t play chess”. Learning to be an elephant took millions of years prior to that particular elephant’s existence. (4)

Epstein gets more and more frustrated with the benighted community of cognitive scientists as he goes on. “To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.”

Or, you could just listen to what that brain is saying? Through its attached mouth, for example? If someone were to object that what I’ve just said in response is a trick, that you don’t know every single thing that that particular brain is doing at any particular moment, the appropriate response is: “So what?” I don’t know what every single one of the 10^26 sub-atomic particles in my cup of coffee is doing on an individual basis either. But I know what they are doing in the aggregate as I pick up the cup, because their collective action is called “heat”. It’s the mean kinetic energy of the molecules in the liquid (i.e. the average amount of whizzing about they are doing, and good luck trying to map that on an individual basis). I don’t need to know everything in order to know anything.
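The coffee-cup point can be made with a toy calculation (again a sketch of my own; the numbers are invented and the sample is a puny million molecules rather than 10^26): no individual molecule’s state is tracked, yet the aggregate quantity, the mean kinetic energy, is perfectly well defined and stable.

```python
import random

# A toy illustration (invented numbers, not a physics engine): individual
# molecular states are all over the place, but the aggregate -- the mean
# kinetic energy, which is what "heat" tracks -- is knowable anyway.

random.seed(1)
N = 1_000_000                          # stand-in for the ~10^26 molecules in the cup
MASS = 3.0e-26                         # roughly the mass of one water molecule (kg)

# Give every molecule its own speed; individually they vary wildly.
speeds = [abs(random.gauss(590.0, 250.0)) for _ in range(N)]   # m/s, made-up spread

mean_ke = sum(0.5 * MASS * v * v for v in speeds) / N
print(f"one molecule's kinetic energy: {0.5 * MASS * speeds[0] ** 2:.3e} J  (could be anything)")
print(f"mean kinetic energy:           {mean_ke:.3e} J  (stable, run after run)")
```

Knowing the mean is what lets you say the coffee is hot without auditing every particle; listening to what a brain says is the same kind of aggregate-level knowledge.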

We still have many problems to solve—but they are problems, not mysteries. And there is only one game in town to solve them: Functionalism. The alternative that Epstein offers—essentialism—has gone the way of astrology, alchemy, and homeopathy. And for the exact same reason. Essentialism comes from the pre-science time of humans. It’s magical thinking.

There is a saying that those who think a task is impossible should not get in the way of those achieving it. The irony is that the opponents of cognitive science live in a world where aeroplanes fly themselves, machines govern investments, and artificial eyes can be spliced into the place of lost ones, directly into nervous systems. The functionalist account of the human brain isn’t something we are predicting. We are living in the midst of it. (5)

—Robert King

1) https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
See also Searle, J. R. (1990). Is the brain’s mind a computer program? Scientific American, 262(1), 26-31.
Other accounts drawing on these metaphors include:

Daugman, J. G. (2001). Brain metaphor and brain theory.

de La Mettrie, J. O. (1960). L’Homme Machine, 1748. Man a Machine.

2) Boole, G. (1854). An investigation of the laws of thought: on which are founded the mathematical theories of logic and probabilities. Dover Publications.
I work at UCC and in honor of Boole’s bicentenary his house is being restored.

3) Moravec, H. P. (2000). Robot: Mere machine to transcendent mind. Oxford University Press on Demand

4) Brooks, R. A. (1990). Elephants don’t play chess. Robotics and autonomous systems, 6(1), 3-15.

5) King, R. J. I Can’t Get (no) Boolean Satisfaction (2016). Frontiers in Psychology. http://dx.doi.org/10.3389/fpsyg.2016.01880