The Recursive Mind (Review) – 4

3. Theory of Mind

Now, I know what you’re thinking. All this stuff about recursion and Julian Jaynes is a little bit tedious. I’m not interested at all. Why does he keep talking about this stuff, anyway? Jaynes’s ideas are clearly preposterous–only an idiot would even consider them. I should quit reading, or maybe head over to Slate Star Codex or Ran Prieur, or maybe Reddit or Ecosophia or Cassandra’s Legacy or…

How do I know what you’re thinking (correctly or not)? It’s because I have a Theory of Mind (ToM), which allows me to imagine and anticipate what other people are thinking. So do you, most likely, which is why you can detect a degree of self-deprecation in my statements above.

Theory of mind is the ability to infer the mental states of other people. It’s often referred to as a sort of “mind-reading.” Daniel Dennett called it the “intentional stance,” meaning that we understand that other people have intentions and motivations that are different from our own. It evolved because we have lived in complex societies that require cooperation and intelligence for millions of years. “According to the intentional stance, we interact with people according to what we think is going on in their minds, rather than in terms of their physical attributes…” (p. 137)

The lack of understanding of other people’s perspectives is what Jean Piaget noticed most in children. Central to many of his notions is the idea that children are egocentric: their own needs and desires are all that exist. “During the earliest stages the child perceives things like a solipsist who is unaware of himself as subject and is familiar only with his own actions.” In other words, the child is unable to recognize that other people have thoughts or feelings different from (or even in conflict with) their own. They are also unaware that others cannot see the same thing that they do. One way to test theory of mind in children is called the Sally-Anne test:

In the Sally-Anne test, a child watches Sally put a marble in her basket and leave the room; Anne then moves the marble to her own box. Asked where Sally will look for the marble when she returns, a child with theory of mind answers “the basket,” because Sally holds a false belief about where the marble is.

Theory of mind is also something that helps us teach and learn. In order for me to effectively teach you, I need to have some idea of what you’re thinking so I can present the material in a way you can understand it. And, of course, you need to have some idea of what’s going on in my mind to understand what I’m trying to teach you. Theory of mind, therefore, is related to cultural transmission (or, more precisely, memetics). Human culture plays such an outsize role in our behavior partly because of our theory of mind. Theory of mind is also a recursive operation which involves embedding your consciousness into someone else’s conscious mind:

From the point of view of this book, the important aspect of theory of mind is that it is recursive. This is captured by the different orders of intentionality… Zero-order intentionality refers to actions or behaviors that imply no subjective state, as in reflex or automatic acts. First-order intentionality involves a single subjective term, as in Alice wants Fred to go away. Second-order intentionality would involve two such terms, as in Ted thinks Alice wants Fred to go away. It is at this level that theory of mind begins.

And so on to third order: Alice believes that Fred thinks she wants him to go away. Recursion kicks in once we get beyond the first order, and our social life is replete with such examples. There seems to be some reason to believe, though, that we lose track at about the fifth or sixth order, perhaps because of limited working memory capacity rather than any intrinsic limit on recursion itself. We can perhaps just wrap our minds around propositions like: Ted suspects that Alice believes that he does indeed suspect that Fred thinks that she wants him (Fred) to go away. That’s fifth order, as you can tell by counting the words in bold type. You could make it sixth order by adding ‘George imagines that…’ at the beginning. p. 137
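
The levels in that passage can be pictured as nested data structures. Here is a minimal sketch in Python (my own illustration, not from the book): each mental state embeds either another mental state or a bare proposition, and the order of intentionality is simply the depth of the nesting.

```python
# Hedged sketch: orders of intentionality as recursive embedding.
from dataclasses import dataclass
from typing import Union

@dataclass
class MentalState:
    agent: str                           # who holds the state, e.g. "Ted"
    attitude: str                        # e.g. "thinks", "wants", "suspects"
    content: Union["MentalState", str]   # another mental state, or a plain proposition

def order(x) -> int:
    """Zero-order for a bare proposition; each embedded mental state adds one."""
    if isinstance(x, str):
        return 0
    return 1 + order(x.content)

# "Ted thinks Alice wants Fred to go away": second-order intentionality
example = MentalState("Ted", "thinks",
                      MentalState("Alice", "wants", "Fred goes away"))
print(order(example))  # -> 2
```

Wrapping the whole thing in one more MentalState (“George imagines that…”) raises the count by one, exactly as the passage describes; the recursion in order() mirrors the recursion in the thought itself.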

Clearly, higher orders of intentionality have been driven by the demands of the social environment one finds oneself in. I will later argue that these higher-order intentionalities developed when we moved from environments where the challenges we faced were predominantly natural (finding food, escaping predators, etc.) to ones where the challenges were primarily social (managing workers, finding mates, leading armies, long-distance trading, negotiating debts, etc.). This change resulted in a fundamental remodeling of the human brain after settled civilization which allowed us to function in such social environments, probably by affecting the action of our serotonin receptors. We’ll get to that later.

Do you see what she sees?

It’s not only one’s mental perspective, but even one’s physical perspective, that ToM can let us take:

Whether instinctive or learned, the human ability to infer the mental states of others goes well beyond the detection of emotion. To take another simple and seemingly obvious example, we can understand what another individual can see. This is again an example of recursion, since we can insert that individual’s experience into our own. It is by no means a trivial feat, since it requires the mental rotation and transformation of visual scenes to match what the other person can see, and the construction of visual scenes that are not immediately visible.

For example, if you are talking to someone face-to-face, you know that she can see what is behind you, though you can’t. Someone standing in a different location necessarily sees the world from a different angle, and to understand that person’s view requires an act of mental rotation and translation. pp. 134-135

I suspect this ability has something to do with out-of-body experiences, where we “see” ourselves from the perspective of somewhere outside our bodies. Recall Jaynes’s point that the “self” is not truly located anywhere in physical space–including behind the eyes. Thus our “self” can theoretically locate itself anywhere, including the ceiling of our hospital room when we are dying.

Not everyone has theory of mind, though, at least not to the same degree. One of the defining characteristics of the autism spectrum is difficulty with ToM. Autistic people tend to not be able to infer what others are thinking, and this leads to certain social handicaps. Corballis makes a common distinction between “people-people” (as in, I’m a “people-person”–avoid anyone who describes themselves this way), and “things-people”, exemplified by engineers, doctors, scientists, programmers, professors, and such-like. “People-people” typically have a highly-developed ToM, which facilitates their feral social cunning. Technically-minded people, by contrast, often (though not always) have a less-developed theory of mind, as exemplified by this quote from the fictional Alan Turing in The Imitation Game: “When people talk to each other, they never say what they mean. They say something else and you’re expected to just know what they mean.”

Research has found autistic people who ace intelligence tests may still have trouble navigating public transportation or preparing a meal. Scoring low on a measure of social ability predicts an incongruity between IQ and adaptive skills. (Reddit)

One fascinating theory of autism that Corballis describes is based on a conflict between the mother’s and the father’s genes imprinting on the developing fetus in the womb:

In mammalian species, the only obligatory contribution of the male to the offspring is the sperm, and the father relies primarily on his genes to influence the offspring to behave in ways that support his biological interest.

Paternal genes should therefore favor self-interested behavior in the offspring, drawing on the mother’s resources and preventing her from using resources on offspring that might have been sired by other fathers. The mother, on the other hand, has continuing investment in the child both before birth…and after birth…Maternal genes should therefore operate to conserve her resources, favoring sociability and educability—nice kids who go to school and do what they’re told.

Maternal genes are expressed most strongly in the cortex, representing theory of mind, language, and social competence, whereas paternal genes tend to be expressed more in the limbic system, which deals with resource-demanding basic drives, such as aggression, appetites, and emotion. Autism, then, can be regarded as the extreme expression of paternal genes, schizophrenia as the extreme expression of maternal genes.

Many of the characteristics linked to the autistic and psychotic spectra are physical, and can be readily understood in terms of the struggle for maternal resources. The autistic spectrum is associated with overgrowth of the placenta, larger brain size, higher levels of growth factors, and the psychotic spectrum with placental undergrowth, smaller brain size, and slow growth…

Imprinting may have played a major role in human evolution. One suggestion is that evolution of the human brain was driven by the progressive influence of maternal genes, leading to expansion of the neocortex and the emergence of recursive cognition, including language and theory of mind. The persisting influence of paternal genes, though, may have preserved the overall balance between people people and things people, while also permitting a degree of difference.

Simon Baron-Cohen has suggested that the dimension can also be understood along an axis of empathizers versus systemizers. People people tend to empathize with others, through adopting the intentional stance and the ability to take the perspective of others. Things people may excel at systemizing, through obsessive attention to detail and compulsive extraction of rules… pp. 141-142

I think this partly explains the popularity of libertarian economics among a certain set of the population, especially in Silicon Valley where people high on the autism spectrum tend to congregate. They tend to treat people as objects for their money-making schemes. They are unable to understand that people are not rational robots, and thus completely buy into the myth of Homo economicus. Their systemizing brains tend to see the Market as a perfect, frictionless, clockwork operating system (if only government “interference” would get out of the way, that is). It also explains why they feel nothing toward the victims of their “creative destruction.” It’s notable that most self-described libertarians tend to be males (who are often more interested in “things” and have a less developed theory of mind in general). In addition, research has shown that people who elect to study economics professionally have lower levels of empathy than the general population (who then shape economic theory to conform to their beliefs). This should be somewhat concerning, since economics, unlike physics or chemistry or meteorology, concerns people.

This sort of calculating, self-centered hyper-rationality also lies behind the capitalist ethos.

The dark side of theory of mind is, of course, the ability to manipulate others. This has been referred to as Machiavellian intelligence, after Niccolo Machiavelli, the Italian diplomat who wrote about how rulers can manipulate the ruled to keep them in awe and obedience. It is certain that Machiavelli had a well-developed theory of mind, because he wrote stuff like this: “Now, in order to execute a political commission well, it is necessary to know the character of the prince and those who sway his counsels; … but it is above all things necessary to make himself esteemed, which he will do if he so regulates his actions and conversation that he shall be thought a man of honour, liberal, and sincere…It is undoubtedly necessary for the ambassador occasionally to mask his game; but it should be done so as not to awaken suspicion and he ought also to be prepared with an answer in case of discovery.” (Wikiquote). In fact, CEOs and middle managers tend to be consummate social manipulators—it’s been shown using psychological tests that successful CEOs and politicians consistently score higher on traits of sociopathy than the general population.

There may be a dark side to social intelligence, though, since some unscrupulous individuals may take advantage of the cooperative efforts of others, without themselves contributing. These individuals are known as freeloaders. In order to counteract their behavior, we have evolved ways of detecting them. Evolutionary psychologists refer to a “cheater-detection module” in the brain that enables us to detect these imposters, but they in turn have developed more sophisticated techniques to escape detection.

This recursive sequence of cheater detection and cheater detection-detection has led to what has been called a “cognitive arms race,” perhaps first identified by the British evolutionary theorist Robert Trivers, and later amplified by other evolutionary psychologists. The ability to take advantage of others through such recursive thinking has been termed Machiavellian intelligence, whereby we use social strategies not merely to cooperate with our fellows, but also to outwit and deceive them…p. 136

It’s been argued (by me, for instance) that a hyperactive “cheater detection module,” often allied with lower levels of empathy, is what lies behind politically conservative beliefs. I would posit, too, that it also underlies many of the misogynistic attitudes among the so-called “Alt-Right”, since their theory of mind is too poorly developed to understand women’s thinking well enough to have positive interactions with them (instead preferring submission and obedience). A tendency toward poor ToM, in my opinion, explains a lot of seemingly unrelated characteristics of the Alt-right (economic libertarianism, misogyny, racism, technophilia, narcissism, atheism, hyper-rationality, ultra-hereditarianism, “political incorrectness”, etc.).

Theory of mind appears to be more developed among women than men, probably because of their childrearing role. Many men can relate to the hyperactive tendency of their wives or girlfriends to “mind read” (“What are you thinking right now?”) and claim that they are correct in their inferences (“I know you’re thinking about your ex..!”).

Theory of Mind has long been seen as fundamental to the neuroscience of religious belief. The ability to attribute mental states to other people leads to attributing human-like qualities and consciousness to other creatures, and even to things. If you’ve ever hit your computer for “misbehaving” or kicked your car for breaking down on you, then you know what I’m talking about. The same tendency to anthropomorphize is behind the misattribution of human traits and behaviors to non-human animals.

According to Robin Dunbar, it is through Theory of Mind that people may have come to know God, as it were. The notion of a God who is kind, who watches over us, who punishes, who admits us to Heaven if we are suitably virtuous, depends on the underlying understanding that other beings—in this case a supposedly supernatural one—can have human-like thoughts and emotions.

Indeed Dunbar argues that several orders of intentionality may be required, since religion is a social activity, dependent on shared beliefs. The recursive loops that are necessary run something like this: I suppose that you think that I believe there are gods who intend to influence our futures because they understand our desires. This is fifth-order intentionality. Dunbar himself must have achieved sixth-order intentionality if he supposes all of this, and if you suppose that he does then you have reached seventh-order…

If God depends on theory of mind, so too, perhaps, does the concept of the self. This returns us to the opening paragraph of this book, and Descartes’ famous syllogism “I think, therefore I am.” Since he was appealing to his own thought about thinking, this is second-order intentionality. Of course, we also understand the self to continue through time, which requires the (recursive) understanding that our consciousness also transcends the present. pp. 137-138 (emphasis mine)

Thus, higher-order gods tend to emerge at a certain point of socio-political complexity, where higher-order states of mind are achieved by a majority of people. A recent paper attempted to determine whether so-called “Moralizing High Gods” (MHG) and “Broad Supernatural Punishers” (BSP) are what allowed larger societies to form, or were rather the result of larger societies and the need to hold them together. The authors concluded the latter:

Do “Big Societies” Need “Big Gods”? (Cliodynamica)

Moralizing Gods as Effect, Not Cause (Marmalade)


Here’s evolutionary psychologist Robin Dunbar explaining why humans appear to be the only primates with the higher-order intentionality necessary to form Moralizing High Gods and Broad Supernatural Punishers:

We know from neuroimaging experiments that mentalizing competencies correlate with the volume of the mentalizing network in the brain, and especially with the volume of the orbitofrontal cortex, and this provides important support for the claim that, across primates, mentalizing competencies correlate with frontal lobe volume. Given this, we can…estimate the mentalizing competencies of fossil hominins, since they must, by definition, be strung out between the great apes and modern humans…As a group, the australopithecines cluster nicely around second-order intentionality, along with other great apes; early Homo populations all sit at third-order intentionality, while archaic humans and Neanderthals can just about manage fourth order; only fossil [Anatomically Modern Humans] (like their living descendants) achieve fifth order. Human Evolution: Our Brains and Behavior by Robin Dunbar, p. 242

… The sophistication of one’s religion ultimately depends on the level of intentionality one is capable of. While one can certainly have religion of some kind with third or fourth order intentionality, there seems to be a real phase shift in the quality of religion that can be maintained once one achieves fifth order intentionality. Given that archaic humans, including Neanderthals, don’t appear to have been more than fourth order intentional, it seems unlikely that they would have had religions of very great complexity. Quite what that means remains to be determined, but the limited archaeological evidence for an active religious life among archaics suggests that, at best, it wasn’t very sophisticated. Human Evolution: Our Brains and Behavior by Robin Dunbar, pp. 285-286

A hyperactive Theory of Mind has long been suspected as playing a role in religious belief, as well as in schizophrenia, in which intentionality has run amok, leading to paranoia and hallucinations (objects talking to you, etc.):

One of the most basic insights of the cognitive science of religion is that religions the world over and throughout human history have reliably evolved so as to involve representations that engage humans’ mental machinery for dealing with the social world. After all, such matters enthrall human minds. The gods and, even more fundamentally, the ancestors are social agents too! On the basis of knowing that the gods are social actors, religious participants know straightaway that they have beliefs, intentions, feelings, preferences, loyalties, motivations, and all of the other states of mind that we recognize in ourselves and others.

What this means is, first, that religious participants are instantly entitled to all of the inferences about social relations, which come as defaults with the development of theory of mind, and, second, that even the most naïve participants can reason about them effortlessly. Such knowledge need not be taught. We deploy the same folk psychology that we utilize in human commerce to understand, explain, and predict the gods’ states of mind and behaviors.

How Religions Captivate Human Minds (Psychology Today)

What Religion is Really All About (Psychology Today)

Most potent for our discussion of Julian Jaynes’s theories is the fact that fMRI scans have shown that auditory hallucinations—of the type that Jaynes described as the basis of ancient belief in gods—activate brain regions associated with Theory of Mind. Here’s psychologist Charles Fernyhough:

…When my colleagues and I scanned people’s brains while they were doing dialogic inner speech, we found activation in the left inferior frontal gyrus, a region typically implicated in inner speech. But we also found right hemisphere activation close to a region known as the temporoparietal junction (TPJ)…that’s an area that is associated with thinking about other people’s minds, and it wasn’t activated when people were thinking monologically…Two established networks are harnessed for the purpose of responding to the mind’s responses in an interaction that is neatly cost-effective in terms of processing resources. Instead of speaking endlessly without expectation of an answer, the brain’s work blooms into dialogue… The Voices Within by Charles Fernyhough; pp. 107-108 (emphasis mine)

Theory of mind is also involved with the brain’s default mode network (DMN), a pattern of neural activity that takes place during mind-wandering, and seems to be largely responsible for the creation of the “unitary self.” It’s quite likely that the perception of the inner voice as belonging to another being with its own personality traits, as Jaynes described, activates our inbuilt ToM module, as do feelings of an “invisible presence” also reported by non-clinical voice hearers.

The default-mode network covers large regions of the brain, mainly in the areas not directly involved in perceiving the world or responding to it. The brain is a bit like a small town, with people milling around, going about their business. When some big event occurs, such as a football game, the people then flock to the football ground, while the rest of the town goes quiet. A few people come from outside, slightly increasing the population. But it’s not the football game we’re interested in here. Rather, it’s the varied business of the town, the give and take of commerce, the sometimes meandering activity of people in their communities and places of work. So it is in the brain. When the mind is not focused on some event, it wanders. The Wandering Mind by Michael C. Corballis, p. 7

Theory of mind is also critical for signed and spoken language. After all, I need to have some idea what’s going on in your mind in order to get my point across. The more I can insert myself into your worldview, the more effectively I can tailor my language to communicate with you, dear reader. Hopefully, I’ve done a decent job (if you didn’t leave after the first paragraph, that is!). It also encourages language construction and development. In our earlier example, one would hope that the understanding of metaphor is sufficient that we implicitly understand that inosculation does not literally involve things kissing each other!

There is evidence to believe that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks. One might argue that this relationship is due solely to the fact that both language and theory of mind seem to begin to develop substantially around the same time in children (between ages 2–5). However, many other abilities develop during this same time period as well, and do not produce such high correlations with one another nor with theory of mind. There must be something else going on to explain the relationship between theory of mind and language.

Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey. A verbal utterance is often underdetermined, and can therefore have different meanings depending on the actual context. Theory of mind abilities can play a crucial role in understanding the communicative and informative intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills….

Theory of Mind (Wikipedia)

Irony, metaphor, humor, and sarcasm are all examples of how language and theory of mind are related. Irony involves a knowing contrast between what is said and what is meant, meaning that you need to be able to infer what another person was thinking. “Irony depends on theory of mind, the secure knowledge that the listener understands one’s true intent. It is perhaps most commonly used among friends, who share common attitudes and threads of thought; indeed it has been estimated that irony is used in some 8 percent of conversational exchanges between friends.” (pp. 159-160) Sarcasm also relies on understanding the difference between what someone said and what they meant. I’m sure you’ve experienced an instance when someone writes some over-the-top comment on an online forum intended to sarcastically parody a spurious point of view, and some reader takes it at face value and loses their shit. It might be because we can’t hear the tone of voice or see the body language of the other person, but I suspect it also has something to do with the high percentage of high-spectrum individuals who frequent such message boards.

Metaphor, too, relies on a non-literal understanding of language. If the captain calls for “all hands on deck,” it is understood that he wants more than just our hands, and that we aren’t supposed to place our hands down on the deck. If it’s “raining cats and dogs,” most of us know that animals are not falling out of the sky. Jokes, too, rely on theory of mind.

Theory of mind allows normal individuals to use language in a loose way that tends not to be understood by those with autism. Most of us, if asked the question “Would you mind telling me the time?” would probably answer with the time, but an autistic individual would be more inclined to give a literal answer, which might be something like “No, I don’t mind.” Or if you ask someone whether she can reach a certain book, you might expect her to reach for the book and hand it to you, but an autistic person might simply respond yes or no. This reminds me that I once made the mistake of asking a philosopher, “Is it raining or snowing outside?”–wanting to know whether I should grab an umbrella or a warm coat. He said, “Yes.” Theory of mind allows us to use our language flexibly and loosely precisely because we share unspoken thoughts, which serve to clarify or amplify the actual spoken message. pp. 160-161

If you do happen to be autistic, and all the stuff I just said goes over your head, don’t fret. I have enough theory of mind to sympathize with your plight. Although, if you are, you might more easily get this old programmer joke:

A programmer is at work when his wife calls and asks him to go to the store. She says she needs a gallon of milk, and if they have fresh eggs, buy a dozen. He comes home with 12 gallons of milk.

The relationship between creativity, mechanical aptitude, genius, and mental illness is complex and poorly understood, but has been a source of fascination for centuries. Oftentimes creative people were thought to be “possessed” by something outside of their own normal consciousness or abilities:

Recent evidence suggests that a particular polymorphism on a gene known to be related to the risk of psychosis is also related to creativity in people with high intellectual achievement.

The tendency to schizophrenia or bipolar disorder may underlie creativity in the arts, as exemplified by musicians such as Bela Bartok, Ludwig van Beethoven, Maurice Ravel, or Peter Warlock, artists such as Amedeo Clemente Modigliani, Maurice Utrillo, or Vincent van Gogh, and writers such as Jack Kerouac, D. H. Lawrence, Eugene O’Neill, or Marcel Proust. The esteemed mathematician John Forbes Nash, subject of the Hollywood movie A Beautiful Mind, is another example. The late David Horrobin went so far as to argue that people with schizophrenia were regarded as the visionaries who shaped human destiny itself, and it was only with the Industrial Revolution, and a change in diet, that schizophrenics were seen as mentally ill. p. 143

Horrobin’s speculations are indeed fascinating, and only briefly alluded to in the text above:

Horrobin…argues that the changes which propelled humanity to its current global ascendancy were the same as those which have left us vulnerable to mental disease.

‘We became human because of small genetic changes in the chemistry of the fat in our skulls,’ he says. ‘These changes injected into our ancestors both the seeds of the illness of schizophrenia and the extraordinary minds which made us human.’

Horrobin’s theory also provides support for observations that have linked the most intelligent, imaginative members of our species with mental disease, in particular schizophrenia – an association supported by studies in Iceland, Finland, New York and London. These show that ‘families with schizophrenic members seem to have a greater variety of skills and abilities, and a greater likelihood of producing high achievers,’ he states. As examples, Horrobin points out that Einstein had a son who was schizophrenic, as was James Joyce’s daughter and Carl Jung’s mother.

In addition, Horrobin points to a long list of geniuses whose personalities and temperaments have betrayed schizoid tendencies or signs of mental instability. These include Schumann, Strindberg, Poe, Kafka, Wittgenstein and Newton. Controversially, Horrobin also includes individuals such as Darwin and Faraday, generally thought to have displayed mental stability.

Nevertheless, psychologists agree that it is possible to make a link between mental illness and creativity. ‘Great minds are marked by their ability to make connections between unexpected events or trends,’ said Professor Til Wykes, of the Institute of Psychiatry, London. ‘By the same token, those suffering from mental illness often make unexpected or inappropriate connections between day-to-day events.’

According to Horrobin, schizophrenia and human genius began to manifest themselves as a result of evolutionary pressures that triggered genetic changes in our brain cells, allowing us to make unexpected links with different events, an ability that lifted our species to a new intellectual plane. Early manifestations of this creative change include the 30,000-year-old cave paintings found in France and Spain…

Schizophrenia ‘helped the ascent of man’ (The Guardian)

Writers May Be More Likely to Have Schizophrenia (PsychCentral)

The link between mental illness and diet is intriguing. For example, the popular ketogenic diet was originally developed not to lose weight, but to treat epilepsy! And, remarkably, a recent study has shown that a ketogenic diet caused remission of long-standing schizophrenia in certain patients. Recall that voice-hearing is a key symptom of schizophrenia (as well as some types of epilepsy). Was a change in diet partially responsible for what Jaynes referred to as bicameralism?

The medical version of the ketogenic diet is a high-fat, low-carbohydrate, moderate-protein diet proven to work for epilepsy. …While referred to as a “diet,” make no mistake: this is a powerful medical intervention. Studies show that over 50 percent of children with epilepsy who do not respond to medications experience significant reductions in the frequency and severity of their seizures, with some becoming completely seizure-free.

Using epilepsy treatments in psychiatry is nothing new. Anticonvulsant medications are often used to treat psychiatric disorders. Depakote, Lamictal, Tegretol, Neurontin, Topamax, and all of the benzodiazepines (medications like Valium and Ativan, commonly prescribed for anxiety) are all examples of anticonvulsant medications routinely prescribed in the treatment of psychiatric disorders. Therefore, it’s not unreasonable to think that a proven anticonvulsant dietary intervention might also help some people with psychiatric symptoms.

Interestingly, the effects of this diet on the brain have been studied for decades because neurologists have been trying to figure out how it works in epilepsy. This diet is known to produce ketones which are used as a fuel source in place of glucose. This may help to provide fuel to insulin resistant brain cells. This diet is also known to affect a number of neurotransmitters and ion channels in the brain, improve metabolism, and decrease inflammation. So there is existing science to support why this diet might help schizophrenia.

Chronic Schizophrenia Put Into Remission Without Medication (Psychology Today)

4. Kinship

The Sierpinski triangle provides a good model for human social organization
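
Incidentally, the self-similar nesting that makes the Sierpinski triangle a tempting metaphor is easy to produce with a recursive routine. A minimal sketch of my own (purely illustrative, not from the post or the book): a triangle of order n is one smaller triangle centered above two copies of itself.

```python
# Hedged illustration: the Sierpinski triangle built by recursive self-similarity.
def sierpinski(order):
    """Return the triangle of the given order as a list of text rows."""
    if order == 0:
        return ["*"]
    smaller = sierpinski(order - 1)                        # the previous, smaller triangle
    width = len(smaller[-1])                               # width of its bottom row
    top = [row.center(2 * width + 1) for row in smaller]   # one copy centered on top
    bottom = [row + " " + row for row in smaller]          # two copies side by side below
    return top + bottom

print("\n".join(sierpinski(3)))
```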

Although not discussed by Corballis, kinship structures are also inherently recursive. Given that kinship structures form the primordial organizational structure for humans, this is another important feature of human cognition that appears to derive from our recursive abilities. For a description of this, we’ll turn once again to Robin Dunbar’s book on Human Evolution. Dunbar (of Dunbar’s number fame) makes the case that the ability to supply names of kin members may be the very basis for spoken language itself!

There is one important aspect of language that some have argued constitutes the origin of language itself – the naming of kin.

There is no particular reason to assume that the ability to name kin relationships was in any way ancestral, although it may well be the case that naming individuals appeared very early. On the other hand, labeling kinship categories (brother, sister, grandfather, aunt, cousin) is quite sophisticated: it requires us to make generalizations and create linguistic categories. And it probably requires us to be able to handle embeddedness, since kinship pedigrees are naturally embedded structures.

Kinship labels allow us to sum up in a single word the exact relationship between two individuals. The consensus among anthropologists is that there are only about six major types of kinship naming systems – usually referred to as Hawaiian, Eskimo, Sudanese, Crow, Omaha and Iroquois after the eponymous tribes that have these different kinship naming systems. They differ mainly in terms of whether they distinguish parallel from cross cousins and whether descent is reckoned unilaterally or bilaterally.

The reasons why these naming systems differ have yet to be explained satisfactorily. Nonetheless, given that one of their important functions is to specify who can marry whom, it is likely that they reflect local variations in mating and inheritance patterns. The Crow and Omaha kinship naming systems, for example, are mirror images of each other and seem to be a consequence of differing levels of paternity certainty (as a result, one society is patrilineal, the other matrilineal). Some of these may be accidents of cultural history, while others may be due to the exigencies of the local ecology. Kinship naming systems are especially important, for example, when there are monopolizable resources like land that can be passed on from one generation to the next and it becomes crucial to know just who is entitled, by descent, to inherit. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 272-273

Systems of kinship appear to be largely based around the means of subsistence and rules of inheritance. Herders, for example, tend to be patriarchal, and hence patrilineal. The same goes for agrarian societies where inheritance of arable land is important. Horticultural societies, by contrast, are often more matrilineal, reflecting women’s important role in food production. Hunter-gatherers, where passing down property is rare, are often bilateral. These are, of course, just rules of thumb. Sometimes tribes are divided into two groups, which anthropologists call moieties (from the French for “half”), which are designed to prevent inbreeding (brides are exchanged exclusively across moieties).

Anthropologists have sometimes claimed that biology cannot explain human kinship naming systems because many societies classify biologically unrelated individuals as kin. This is a specious argument for two separate reasons. One is that the claim is based on a naive understanding of what biological kinship is all about.

This is well illustrated by how we treat in-laws. In English, we classify in-laws (who are biologically unrelated to us) using the same kin terms that we use for real biological relatives (father-in-law, sister-in-law, etc.). However…we actually treat them, in emotional terms, as though they were real biological kin, and we do so for a very good biological reason: they share with us a common genetic interest in the next generation.

We tend to think of genetic relatedness as reflecting past history (i.e. how two people are related in a pedigree that plots descent from some common ancestor back in time). But in fact, biologically speaking, this isn’t really the issue, although it is a convenient approximation for deciding who is related to whom. In an exceptionally insightful but rarely appreciated book (mainly because it is very heavy on maths), Austen Hughes showed that the real issue in kinship is not relatedness back in time but relatedness to future offspring. In-laws have just as much stake in the offspring of a marriage as any other relative, and hence should be treated as though they are biological relatives. Hughes showed that this more sophisticated interpretation of biological relatedness readily explains a large number of ethnographic examples of kinship naming and co-residence that anthropologists have viewed as biologically inexplicable. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277

As a sort of proof of this, many of the algorithms that have been developed to determine genetic relatedness between individuals (whether they carry the same genes) are recursive (see the sketch after the link below)! It’s also notable that the Pirahã, whose language allegedly does not use recursion, also do not have extended kinship groups (or ancestor worship or higher-order gods, for that matter; in fact, they are said to live entirely in the present, meaning no mental time travel either).

Piraha Indians, Recursion, Phonemic Inventory Size and the Evolutionary Significance of Simplicity (Anthropogenesis)
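
As a concrete illustration of the recursive computation mentioned above, here is a minimal sketch of my own (a made-up pedigree, not any particular published algorithm): the kinship coefficient between two people is computed by recursing from an individual up to his or her parents, bottoming out at unrelated founders. For non-inbred individuals, the coefficient of relatedness is twice the kinship coefficient.

```python
# Hedged sketch: kinship coefficient by recursion up a (hypothetical) pedigree.
from functools import lru_cache

# name -> (mother, father, generation); founders have no known parents.
PEDIGREE = {
    "grandma": (None, None, 0), "grandpa": (None, None, 0),
    "mom":     ("grandma", "grandpa", 1),
    "aunt":    ("grandma", "grandpa", 1),
    "dad":     (None, None, 1),
    "child":   ("mom", "dad", 2),
    "cousin":  ("aunt", None, 2),
}

@lru_cache(maxsize=None)
def kinship(a: str, b: str) -> float:
    """Probability that an allele drawn from a and one drawn from b are identical by descent."""
    if a == b:
        return 0.5                              # self-kinship, ignoring inbreeding
    # Recurse on whichever individual is in the more recent generation,
    # so we always climb toward the founders and the recursion terminates.
    if PEDIGREE[a][2] < PEDIGREE[b][2]:
        a, b = b, a
    mother, father, _ = PEDIGREE[a]
    known_parents = [p for p in (mother, father) if p is not None]
    if not known_parents:
        return 0.0                              # two distinct founders: treated as unrelated
    # An unknown parent contributes zero, hence dividing by 2 rather than by len().
    return sum(kinship(p, b) for p in known_parents) / 2

print(2 * kinship("child", "cousin"))  # first cousins: relatedness = 0.125
```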

The second point is that in traditional small-scale societies everyone in the community is kin, whether by descent or by marriage; those few who aren’t soon become so by marrying someone or by being given some appropriate status as fictive or adoptive kin. The fact that some people are misclassified as kin or a few strangers are granted fictional kinship status is not evidence that kinship naming systems do not follow biological principles: a handful of exceptions won’t negate the underlying evolutionary processes associated with biological kinship, not least because everything in biology is statistical rather than absolute. One would need to show that a significant proportion of naming categories cross meaningful biological boundaries, but in fact they never do. Adopted children can come to see their adoptive parents as their real parents, but adoption itself is quite rare; moreover, when it does occur in traditional societies it typically involves adoption by relatives (as anthropological studies have demonstrated). A real sense of bonding usually happens only when the child is very young (and even then the effect is much stronger for the child than for the parents – who, after all, know the child is not theirs).

Given that kinship naming systems seem to broadly follow biological categories of relatedness, a natural assumption is that they arise from biological kin selection theory… It seems we have a gut response to help relatives preferentially, presumably as a consequence of kin selection…Some of the more distant categories of kin (second and third cousins, and cousins once removed, as well as great-grandparents and great-great-grandparents) attract almost as strong a response from us as close kin. Yet these distant relationships are purely linguistic categories that someone has labelled for us (‘Jack is your second cousin – you share a great-grandmother’). The moment you are told that somebody is related to you, albeit distantly, it seems to place them in a very different category from mere friends, even if you have never met them before…You only need to know one thing about kin – that they are related to us (and maybe exactly how closely they are related), whereas with a friend we have to track back through all the past interactions to decide how they actually behaved on different occasions. Because less processing has to be done, decisions about kin should be done faster and at less cognitive cost than decisions about unrelated individuals. This would imply that, psychologically, kinship is an implicit process (i.e. it is automated), whereas friendship is an explicit process (we have to think about it)…

It may be no coincidence that 150 individuals is almost exactly the number of living descendants (i.e. members of the three currently living generations: grandparents, parents and children) of a single ancestral pair two generations back (i.e. the great-great-grandparents) in a society with exogamy (mates of one sex come from outside the community, while the other sex remains for life in the community into which it was born). This is about as far back as anyone in the community can have personal knowledge about who is whose offspring so as to be able to vouch for how everyone is related to each other. It is striking that no kinship naming system identifies kin beyond this extended pedigree with its natural boundary at the community of 150 individuals. It seems as though our kinship naming systems may be explicitly designed to keep track of and maintain knowledge about the members of natural human communities. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277

Corballis concludes:

Recursion, then, is not the exclusive preserve of social interaction. Our mechanical world is as recursively complex as is the social world. There are wheels within wheels, engines within engines, computers within computers. Cities are containers built of containers within containers, going right down, I suppose, to handbags and pockets within our clothing. Recursive routines are a commonplace in computer programming, and it is mathematics that gives us the clearest idea of what recursion is all about. But recursion may well have stemmed from runaway theory of mind, and been later released into the mechanical world. p. 144
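
To make the “wheels within wheels” image concrete, here is the kind of commonplace recursive routine the quote alludes to, in a minimal sketch of my own: it walks arbitrarily nested containers and counts the items inside, calling itself whenever it finds a container within the container.

```python
# Hedged sketch: a recursive routine over "containers within containers".
def count_items(container) -> int:
    """Count every non-container item inside arbitrarily nested lists."""
    total = 0
    for item in container:
        if isinstance(item, list):        # a container within the container
            total += count_items(item)    # the routine calls itself: recursion
        else:
            total += 1
    return total

# A toy "city" of containers: a bag inside a bag, a pocket with a coin, and so on.
city = ["bus", ["handbag", ["keys", "phone"]], ["pocket", ["coin"]]]
print(count_items(city))  # -> 6
```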

In the final section of The Recursive Mind, Corballis takes a quick tour through human evolution to see when these abilities may have first emerged. That’s what we’ll take a look at in our last installment of this series.

The Recursive Mind (Review) – 3

Part 1

Part 2

2. Mental Time Travel

The word “remembering” is used loosely and imprecisely. There are actually multiple different types of memory; for example, episodic memory and semantic memory.

Episodic memory: The memory of actual events located in time and space, i.e., “reminiscing.”

Semantic memory: The storehouse of knowledge that we possess, but which does not involve any kind of conscious recollection.

Semantic memory refers to general world knowledge that we have accumulated throughout our lives. This general knowledge (facts, ideas, meaning and concepts) is intertwined in experience and dependent on culture.

Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives, which we can recreate at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat.

We can learn about new concepts by applying our knowledge learned from things in the past. The counterpart to declarative or explicit memory is nondeclarative memory or implicit memory.

Semantic memory (Wikipedia)

Episodic memory is essential for the creation of the narrative self. Episodic memory takes various forms, for example:

Specific events: When you first set foot in the ocean.

General events: What it feels like stepping into the ocean in general. This is a memory of what a personal event is generally like. It might be based on the memories of having stepped in the ocean, many times during the years.

Flashbulb memories: Flashbulb memories are critical autobiographical memories about a major event.

Episodic Memory (Wikipedia)

For example, if you are taking a test for school, you are probably not reminiscing about the study session you had the previous evening, or where you need to be the next class period. You are probably not thinking about your childhood, or about the fabulous career prospects that are sure to result from passing this test. Those episodic memories—inserting yourself into past or future scenarios—would probably be a hindrance to the test you are presently trying to complete. Semantic memory would be what you are drawing upon to answer the questions (hopefully correctly).


It is often difficult to distinguish between one and the other. Autobiographical memories are often combinations of the two—lived experience combined with autobiographical stories and family folklore. Sometimes, we can even convince ourselves that things that didn’t happen actually did (false memories). Our autobiographical sense of self is determined by this process.

Endel Tulving has described remembering as autonoetic, or self-knowing, in that one has projected one’s self into the past to re-experience some earlier episode. Simply knowing something, like the boiling point of water, is noetic, and implies no shift of consciousness. Autonoetic awareness, then, is recursive, in that one can insert previous personal experience into present awareness. This is analogous to the embedding of phrases within phrases, or sentences within sentences.

Deeper levels of embedding are also possible, as when I remember today that yesterday I had remembered an event that occurred at some earlier time. Chunks of episodic awareness can thus be inserted into each other in recursive fashion. Having coffee at a conference recently, I was reminded of an earlier conference where I managed to spill coffee on a distinguished philosopher. This is memory of a memory of an event. I shall suggest later that this kind of embedding may have set the stage for the recursive structure of language itself (p. 85) [Coincidentally, as I was typing this paragraph, I spilled coffee on the book. Perhaps you will spill coffee on your keyboard while reading this. – CH]

Corballis mentions the case of English musician Clive Wearing, whose hippocampus was damaged, leading to anterograde and retrograde amnesia. At the other end of the spectrum is the Russian mnemonist Solomon Shereshevsky.

The Abyss (Oliver Sacks, The New Yorker)

Long-term memory can further be subdivided into implicit memory and explicit (or declarative) memory.

“Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition.” (p. 98) … Implicit memory…enables us to learn without any awareness that we are doing so. It is presumably more primitive in an evolutionary sense than is explicit memory, which is made up of semantic and episodic memory. Explicit memory is sometimes called declarative memory because it is the kind of memory we can talk about or declare.

Implicit memory does not depend on the hippocampus, so amnesia resulting from hippocampal damage does not entirely prevent adaptation to new environments or conditions, but such adaptation does not enter consciousness. p. 88 (emphasis mine)

Explicit memories, by contrast, “provide yet more adaptive flexibility, because it does not depend on immediate evocation from the environment” p. 98 (emphasis mine)

The textbook case of implicit memory is riding a bicycle. You don’t think about it, or ponder how to do it; you just do it. No amount of intellectual thought and pondering and thinking through your options will help you to swim or ride a bike or play the piano. When a line drive is hit to the shortstop, implicit memory, not explicit memory, catches the ball (although the catch might provide a nice explicit memory for the shortstop later on). A daydreaming shortstop would miss the ball completely.

Words are stored in semantic memory, and only rarely or transiently in episodic memory. I have very little memory of the occasions on which I learned the meanings of the some 50,000 words that I know–although I can remember occasionally looking up obscure words that I didn’t know, or that had escaped my semantic memory. The grammatical rules by which we string words together may be regarded as implicit rather than explicit memory, as automatic, perhaps, as riding a bicycle. Indeed, so automatic are the rules of grammar that linguists have still not been able to elaborate all of them explicitly. p. 126 (emphasis mine)

Operant conditioning (also called signal learning, solution learning, or instrumental learning) is another type of learning that does not require conscious, deliberative thought. It is simple stimulus and response. You touch the stove, and you learn that the stove is hot. The same goes for classical conditioning: there was no thinking involved when Pavlov’s dogs salivated at the sound of a bell, for example. In a very unethical experiment, the behaviorist John B. Watson took a nine-month-old orphan and conditioned him to be afraid of rats, rabbits, monkeys, dogs and masks. He did this by making a loud, sharp noise (banging a metal bar with a hammer), which the child was afraid of, whenever the child was presented with those things. By associating the sound with the stimulus, he was able to induce a fear of those items. But there was no volition; no conscious thought was involved in this process. It works the same way on dogs, rabbits, humans or fruit flies. Behaviorism tells us next to nothing about human consciousness, or what makes us different.

These types of conditioning may be said to fall under the category of implicit memory. As we have seen, implicit memory may also include the learning of skills and even mental strategies to cope with environmental challenges. Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition. Of course, one may remember the experience of learning to ride a bicycle, but that is distinct from the learning itself…These are episodic memories, independent of the process of actually learning (more or less) to ride the bike. p. 98 (emphasis mine, italics in original)

This important distinction is what is behind Jaynes’s declaration that learning and remembering do not require consciousness. Implicit memory and operant conditioning do not require the kind of deliberative self-consciousness or “analog I” that Jaynes described. Even explicit memory—the ability to recall facts and details, for example—does not, strictly speaking, require deliberative self-consciousness. Clive Wearing, referred to above, could still remember how to play the piano, despite living in an “eternal present.” Thus, it is entirely possible that things such as ruminative self-consciousness emerged quite late in human history. Jaynes himself described why consciousness (as distinct from simply being functional and awake) is not required for learning, and can even be detrimental to it.

In more everyday situations, the same simple associative learning can be shown to go on without any consciousness that it has occurred. If a distinct kind of music is played while you are eating a particularly delicious lunch, the next time you hear the music you will like its sounds slightly more and even have a little more saliva in your mouth. The music has become a signal for pleasure which mixes with your judgement. And the same is true for paintings. Subjects who have gone through this kind of test in the laboratory, when asked why they liked the music or paintings better after lunch, could not say. They were not conscious they had learned anything. But the really interesting thing here is that if you know about the phenomenon beforehand and are conscious of the contingency between food and the music or painting, the learning does not occur. Again, consciousness reduces our learning abilities of this type, let alone not being necessary for them…

The learning of complex skills is no different in this respect. Typewriting has been extensively studied, it generally being agreed in the words of one experimenter “that all adaptations and short cuts in methods were unconsciously made, that is, fallen into by the learners quite unintentionally.” The learners suddenly noticed that they were doing certain parts of the work in a new and better way.

Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying the words. The important thing here is that the subject is not aware that he is learning anything at all. He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it. OoCitBotBM; pp. 33-35

But we not only use our memory to recall past experiences, we also think about future events as well, and this is based on the same ability to mentally time travel. It may seem paradoxical to think of memory as having anything to do with events that haven’t happened yet, but brain scans show that similar areas of the brain are activated when recalling past events and envisioning future ones—particularly the prefrontal cortex, but also parts of the medial temporal lobe. There is slightly more activity in imagining future events, probably due to the increased creativity required of this activity.


In this ability to mentally time travel we seem to be unique among animals, at least in the extent to which we do it and our abilities to do so:

So far, there is little convincing evidence that animals other than humans are capable of mental time travel—or if they are, their mental excursions into past or future have little of the extraordinary flexibility and broad provenance that we see in our own imaginative journeys. The limited evidence from nonhuman animals typically comes from behaviors that are fundamentally instinctive, such as food caching or mating, whereas in humans mental time travel seems to cover all aspects of our complex lives. p. 112

Animals Are ‘Stuck In Time’ With Little Idea Of Past Or Future, Study Suggests (Science Daily)

However, see: Mental time-travel in birds (Science Daily)

We are always imagining and anticipating, whether it’s events later the same day or perhaps years from now. Even in a conversation, we are often planning what we are about to say, rather than focusing on the conversation itself. That is, we are often completely absent in the present moment, which is something that techniques like mindfulness meditation are designed to mitigate. We can even imagine events after we are dead, and it has been argued that this knowledge lies behind many unique human behaviors such as the notion of an afterlife and the idea of religion more generally. The way psychologists study this is to use implicit memory (as described above) to remind people of their own mortality. This is done through a technique called priming:

Priming is remarkably resilient. In one study, for example, fragments of pictures were used to prime recognition of whole pictures of objects. When the same fragments were shown 17 years later to people who had taken part in the original experiment, they were able to write the name of the object associated with each fragment much more accurately than a control group who had not previously seen the fragments. p. 88

It has been shown that when people are primed with notions of death and their own mortality, they become, in general, more authoritarian, more aggressive, more hostile to out-groups, and simultaneously more loyal to in-groups. Here’s psychologist Sheldon Solomon describing the effect in a TED Talk:

“Studies show that when people are reminded of their mortality, for example, by completing a death anxiety questionnaire, or being interviewed in front of a funeral parlor, or even exposed to the word ‘death’ that’s flashed on a computer screen so fast—28 milliseconds—that you don’t know if you’ve even seen anything—When people are reminded of their own death, Christians, for example, become more derogatory towards Jews, and Jews become more hostile towards Muslims. Germans sit further away from Turkish people. Americans who are reminded of death become more physically aggressive to other Americans who don’t share their political beliefs. Iranians reminded of death are more supportive of suicide bombing, and they’re more willing to consider becoming martyrs themselves. Americans reminded of their mortality become more enthusiastic about preemptive nuclear, chemical and biological attacks against countries who pose no direct threat to us. So man’s inhumanity to man—our hostility and disdain toward people who are different—results then, I would argue, at least in part from our inability to tolerate others who do not share the beliefs that we rely on to shield ourselves from mortal terror.”

Humanity at the Crossroads (YouTube)

One important aspect of episodic memory is that it locates events in time. Although we are often not clear precisely when remembered events happened, we usually have at least a rough idea, and this is sufficient to give rise to the general understanding of time itself. It appears that locating events in time and in space are related.

Episodic memory allows us to travel back in time, and consciously relive previous experiences. Thomas Suddendorf called this mental time travel, and made the important suggestion that mental time travel allows us to imagine future events as well as remember past ones. It also adds to the recursive possibilities; I might remember, for example, that yesterday I had plans to go to the beach tomorrow. The true significance of episodic memory, then, is that it provides a vocabulary from which to construct future events, and so fine-tune our lives.

What has been termed episodic future thinking, or the ability to imagine future events, emerges in children at around the same time as episodic memory itself, between the ages of three and four. Patients with amnesia are as unable to answer questions about past events as they are to say what might happen in the future… p. 100

Once again, the usefulness of this will be determined by the social environment. I will argue later that this ability to mentally time travel, like the ability to “read minds” (which we’ll talk about next), became more and more adaptive over time as societies became more complex. For example, it would play little to no role among immediate return hunter gatherers (such as the Pirahã), who live mostly in the present and do not have large surpluses. Among delayed return hunter gatherers and horticulturalists, however, it would play a far larger role.

When we get to complex foragers and beyond, however, the ability to plan for the future becomes almost like a super-power. And here, we see a connection I will make between recursion and the Feasting Theory we’ve previously discussed. Simply put, an enhanced sense of future states allows one to more effectively ensnare people in webs of debt and obligation, which can then be leveraged to gain wealth and social advantage. I will argue that this is what allowed the primordial inequalities to form in various societies which could produce surpluses of wealth. It also demonstrates the evolutionary advantages of recursive thinking.

Corballis then ties together language and mental time travel. He posits that the recursive nature of language evolved specifically to allow us to share past and future experiences. It allows us to narratize our lives, and to tell that story to others, and perhaps more importantly, to ourselves.

Language allows us to construct things that don’t exist—shared fictions. It allows us to tell fictional stories of both the past and the future.

Episodic memories, along with combinatorial rules, allow us not only to create and communicate possible episodes in the future, but also to create fictional episodes. As a species, we are unique in telling stories. Indeed the dividing line between memory and fiction is blurred; every fictional story contains elements of memory, and memories contain elements of fiction…Stories are adaptive because they allow us to go beyond personal experience to what might have been, or to what might be in the future. They provide a way of stretching and sharing experiences so that we are better adapted to possible futures. Moreover, stories tend to become institutionalized, ensuring that shared information extends through large sections of the community, creating conformity and social cohesion. p. 124

The main argument … is that grammatical language evolved to enable us to communicate about events that do not take place in the here and now. We talk about episodes in the past, imagined or planned episodes in the future, or indeed purely imaginary episodes in the form of stories. Stories may extend beyond individual episodes, and involve multiple episodes that may switch back and forth in time. The unique properties of grammar, then, may have originated in the uniqueness of human mental time travel…Thus, although language may have evolved, initially at least, for the communication of episodic information, it is itself a robust system embedded in the more secure vaults of semantic and implicit memory. It has taken over large areas of our memory systems, and indeed our brains. p. 126


The mental faculties that allow us to locate, sort, and retrieve events in time are apparently the same ones that we use to locate things in space. Languages have verb tenses that describe when things took place (although a few languages lack them). The ability to range at will over past, present and future gave rise to stories, which are often the glue that holds societies together, such as origin stories or tales of distant ancestors. Is the image above truly about moving forward in space, or is it about something else? What does it mean to say things like we “move forward” after a tragedy?

Different sets of grid cells form different grids: grids with larger or smaller hexagons, grids oriented in other directions, grids offset from one another. Together, the grid cells map every spatial position in an environment, and any particular location is represented by a unique combination of grid cells’ firing patterns. The single point where various grids overlap tells the brain where the body must be…Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.

The Brain Maps Out Ideas and Memories Like Spaces (Quanta)

It is likely that a dog, or even a bonobo, does not tell itself an ongoing “story” of its life. It simply “is.” If we accept narratization as an important feature of introspective self-consciousness, then we must accept that the ability to tell ourselves these internal stories is key to the creation of that concept. But when did we acquire this ability? And is it universal? Clearly, it has something to do with the acquisition of language. And if we accept a late origin of language, it certainly cannot have arisen more than 70,000–50,000 years before present. To conclude, here is an excerpt from a paper Corballis wrote for the Royal Society:

the evolution of language itself is intimately connected with the evolution of mental time travel. Language is exquisitely designed to express ‘who did what to whom, what is true of what, where, when and why’…and these are precisely the qualities needed to recount episodic memories. The same applies to the expression of future events—who will do what to whom, or what will happen to what, where, when and why, and what are we going to do about it…To a large extent, then, the stuff of mental time travel is also the stuff of language.

Language allows personal episodes and plans to be shared, enhancing the ability to plan and construct viable futures. To do so, though, requires ways of representing the elements of episodes: people; objects; actions; qualities; times of occurrence; and so forth…The recounting of mental time travel places a considerable and, perhaps, uniquely human burden on communication, since there must be ways of referring to different points in time—past, present and future—and to locations other than that of the present. Different cultures have solved these problems in different ways. Many languages use tense as a way of modifying verbs to indicate the time of an episode, and to make other temporal distinctions, such as that between continuous action and completed action. Some languages, such as Chinese, have no tenses, but indicate time through other means, such as adverbs or aspect markers. The language spoken by the Pirahã, a tribe of some 200 people in Brazil, has only a very primitive way of talking about relative time, in the form of two tense-like morphemes, which seem to indicate simply whether an event is in the present or not, and Pirahã are said to live largely in the present.

Reference to space may have a basis in hippocampal function; as noted earlier, current theories suggest that the hippocampus provides the mechanism for the retrieval of memories based on spatial cues. It has also been suggested that, in humans, the hippocampus may encompass temporal coding, perhaps through analogy with space; thus, most prepositions referring to time are borrowed from those referring to space. In English, for example, words such as at, about, around, between, among, along, across, opposite, against, from, to and through are fundamentally spatial, but are also employed to refer to time, although a few, such as since or until, apply only to the time dimension. It has been suggested that the hippocampus may have undergone modification in human evolution, such that the right hippocampus is responsible for the retrieval of spatial information, and the left for temporal (episodic or autobiographical) information. It remains unclear whether the left hippocampal specialization is a consequence of left hemispheric specialization for language, or of the incorporation of time into human consciousness of past and future, but either way it reinforces the link between language and mental time travel.

The most striking parallel between language and mental time travel has to do with generativity. We generate episodes from basic vocabularies of events, just as we generate sentences to describe them. It is the properties of generativity and recursiveness that, perhaps, most clearly single out language as a uniquely human capacity. The rules governing the generation of sentences about episodes must depend partly on the way in which the episodes themselves are constructed, but added rules are required by the constraints of the communication medium itself. Speech, for example, requires that the account of an event that is structured in space–time be linearized, or reduced to a temporal sequence of events. Sign languages allow more freedom to incorporate spatial as well as temporal structure, but still require conventions. For example, in American sign language, the time at which an event occurred is indicated spatially, with the continuum of past to future running from behind the body to the front of the body.

Of course, language is not wholly dependent on mental time travel. We can talk freely about semantic knowledge without reference to events in time… However, it is mental time travel that forced communication to incorporate the time dimension, and to deal with reference to elements of the world, and combinations of those elements, that are not immediately available to the senses. It is these factors, we suggest, that were in large part responsible for the development of grammars. Given the variety of ways in which grammars are constructed, such as the different ways in which time is marked in different languages, we suspect that grammar is not so much a product of some innately determined universal grammar as it is a product of culture and human ingenuity, constrained by brain structure.

Mental time travel and the shaping of the human mind (The Royal Society)

Next time, we’ll take a look at another unique recursive ability of the human mind: the ability to infer the thoughts and emotions of other people, a.k.a. the Theory of Mind.

The Recursive Mind (Review) – 2

Part 1

1. Language

We’ve already covered language a bit. A good example of language recursion is given by children’s rhymes, such as This is the House That Jack Built:

It is a cumulative tale that does not tell the story of Jack’s house, or even of Jack who built the house, but instead shows how the house is indirectly linked to other things and people, and through this method tells the story of “The man all tattered and torn”, and the “Maiden all forlorn”, as well as other smaller events, showing how these are interlinked…(Wikipedia)

“The House That Jack Built” plays on the process of embedding in English noun phrases. The nursery rhyme is one sentence that continuously grows by embedding more and more relative clauses as postmodifiers in the noun phrase that ends the sentence…In theory, we could go on forever because language relies so heavily on embedding.

The Noun Phrase (Papyr.com)

In English, clauses can be embedded either in the center, or at the end:

In “The House That Jack Built” clauses are added to the right. This is called right-embedding. Much more psychologically taxing is so-called center-embedding, where clauses are inserted in the middle of clauses. We can cope with a single embedded clause, as in:

“The malt that the rat ate lay in the house that Jack built.”

But it becomes progressively more difficult as we add further embedded clauses:

“The malt [that the rat (that the cat killed) ate] lay in the house that Jack built.”

Or worse:

“The malt [that the rat (that the cat {that the dog chased} killed) ate] lay in the house that Jack built.”

I added brackets in the last two examples that may help you see the embeddings, but even so they’re increasingly difficult to unpack. Center-embedding is difficult because words to be linked are separated by the embedded clauses; in the last example above, it was the malt that lay in the house, but the words malt and lay are separated by twelve words. In holding the word malt in mind in order to hear what happened to it, one must also deal with separations between rat and ate and between cat and killed…Center embeddings are more common in written language than in spoken language, perhaps because when language is written you can keep it in front of you indefinitely while you try to figure out the meaning….The linguistic rules that underlie our language faculty can create utterances that are potentially, if not actually, unbounded in potential length and variety. These rules are as pure and beautiful as mathematics…

The Truth About Language pp. 13-14
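One way to get a feel for why the center-embedded versions are so taxing is to picture the listener as a parser with a stack: every noun that gets “opened” has to be held in memory until its verb finally shows up. Here is a rough sketch of that bookkeeping (in Python, my own illustration, not anything from Corballis):

```python
# A rough sketch of why center-embedding taxes working memory: each opened
# noun must be held on a stack until its verb arrives, and the deeper the
# nesting, the longer the wait. (Illustrative only.)

nouns = ["the malt", "the rat", "the cat", "the dog"]
verbs = ["chased", "killed", "ate", "lay in the house that Jack built"]

stack = []
for noun in nouns:        # "the malt that the rat that the cat that the dog..."
    stack.append(noun)    # every noun is left hanging, awaiting its verb

for verb in verbs:        # "...chased killed ate lay..."
    noun = stack.pop()    # the most recently opened noun is resolved first
    print(f"{noun} -> {verb}")

# With right-embedding ("the dog chased the cat that killed the rat that ate
# the malt..."), each noun meets its verb immediately, so nothing piles up.
```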

Or a song you may have sung when you were a child: “There Was an Old Lady who Swallowed a Fly.”

The song tells the nonsensical story of an old woman who swallows increasingly large animals, each to catch the previously swallowed animal, but dies after swallowing a horse. The humour of the song stems from the absurdity that the woman is able to inexplicably and impossibly swallow animals of preposterous sizes without dying, suggesting that she is both superhuman and immortal; however, the addition of a horse is finally enough to kill her. Her inability to survive after swallowing the horse is an event that abruptly and unexpectedly applies real-world logic to the song, directly contradicting her formerly established logic-defying animal-swallowing capability. (Wikipedia)

The structure can be expressed this way:

cow [goat (dog {cat [bird (spider {fly})]})] – after which, she swallows the horse and expires. The resulting autopsy would no doubt unfold a chain of events resembling a Matryoshka doll (or a Turducken).

Or yet another chestnut from my childhood: “There’s a Hole in My Bucket,” which is less an example of recursion than a kind of strange loop:

The song describes a deadlock situation: Henry has a leaky bucket, and Liza tells him to repair it. To fix the leaky bucket, he needs straw. To cut the straw, he needs an axe. To sharpen the axe, he needs to wet the sharpening stone. To wet the stone, he needs water. But to fetch water, he needs the bucket, which has a hole in it. (Wikipedia)

Whether all human languages have a recursive structure by default, or are at least capable of it, is one of the most controversial topics in linguistics.

Bringing more data to language debate (MIT News)

The idea that language is not just based on external stimulus, but is in some way “hard-wired” into the human brain was first developed by Noam Chomsky. He argued that this meant that grammatical constructions were somehow based on the brain’s inner workings (i.e. how the brain formulates thoughts internally), and therefore all languages would exhibit similar underlying structures, something which he called the “Universal Grammar.”

Furthermore, he argued that language construction at its most fundamental level could be reduced to a single recursive operation he called Merge. This was part of his so-called “Minimalist Program” of language construction.

Merge is…when two syntactic objects are combined to form a new syntactic unit (a set).

Merge also has the property of recursion in that it may apply to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge.

This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is “an indispensable operation of a recursive system … which takes two syntactic objects A and B and forms the new object G={A,B}” (Wikipedia)
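To make the recursive character of Merge concrete, here is a toy sketch (in Python, purely my own illustration, not Chomsky’s formalism): two syntactic objects are combined into a set, and because the result is itself a syntactic object, the operation can apply to its own output.

```python
# A toy sketch of Merge (illustrative only): combine two syntactic objects
# into a new set; since the output is itself a syntactic object, Merge can
# apply to its own output -- recursion.

def merge(a, b):
    return frozenset([a, b])   # G = {A, B}

# Lexical items are the atoms; everything else is built by repeated Merge.
vp = merge("loves", "John")    # {loves, John}
s = merge("Jane", vp)          # {Jane, {loves, John}}
print(s)
```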

Merge applies to I-language, the thinking behind language, whereas the language actually spoken aloud is what he calls E-language (E for external). Corballis explains:

The Merge operation…strictly hold for what Chomsky calls I-language, which is the internal language of thought, and need not apply directly to E-language, which is external language as actually spoken or signed. In mapping I-language to E-language, various supplementary principles are needed. For instance…the merging of ‘Jane loves John’ with ‘Jane flies airplanes’ to get, ‘Jane, who flies airplanes, loves John’ requires extra rules to introduce the word who and delete one copy of the word Jane.

I-language will map onto different E-languages in different ways. Chomsky’s notion of unbounded Merge, recursively applied, is therefore essentially an idealization, inferred from the study of external languages, but is not in itself directly observable. pp. 23-24

It’s notable that, whatever the other merits of Merge, it does appear to be a good description of how language is extended via metaphor. I recently ran across a good example of this: the word inosculate, meaning to homogenize, make continuous, or interjoin. Its root is the verb “to kiss,” which is itself derived from the word for “mouth.” This word, like so many others, was created through recursion and metaphor.

From in- +‎ osculate, from Latin ōsculātus (“kiss”), from ōs + -culus (“little mouth”).

The sheer diversity of human languages that have been found and studied has put Chomsky’s Universal Grammar theory on ice. There does not seem to be any sort of universal grammar that we can find, nor a universal method of thought which underlies it. A few languages have been discovered that do not appear to use recursion, most famously the Pirahã language of the Amazon, but also the Iatmul language of New Guinea and some Australian languages spoken in West Arnhem Land. For example, the phrase “They stood watching us fight” would be rendered as “They stood/they were watching us/we were fighting” in Bininj Gun-Wok (p. 27).

Recursion and Human Thought: A Talk with Daniel Everett (Edge Magazine)

It continues to be debated whether animals have a capacity for recursive expression. A 2006 study argued that starling calls exhibited a recursive quality, but this has been questioned. As I mentioned earlier, it is often difficult to tell whether something that appears recursive was actually generated recursively.

Starlings vs. Chomsky (Discover)

Corballis argues here (as he has in other books, which I will also refer to), that the human mental capacity for language evolved first via gesticulation (hand gestures), rather than verbal sounds (speech). Only much later, he argues, did communication switch from primarily hand gestures to speech. “I have argued…that the origins of language lie in manual gestures, and the most language-like behavior in nonhuman species is gestural.” (p. 161) Some reasons he gives for believing this are:

1.) We have had extensive control over our arms, hands and fingers (as demonstrated by tool use and manufacture, for example) for millions of years, but the fine motor control over our lungs and vocal tract required to produce articulate speech is of far more recent vintage. It is also unique to our species—other apes don’t have the control over the lungs or mouth required for speech. In fact, the unique control that humans possess over their breathing leads Corballis to speculate that distant human ancestors must have spent considerable time diving in water, which requires extensive breath control. Human babies, for example, instinctively know to hold their breath in water in a way that other apes—including our closest relatives—cannot. This leads him to endorse an updated version of the Aquatic Ape theory called the Littoral hypothesis, or the Beachcomber theory.

In an extensive discussion of the aquatic ape hypothesis and the various controversies surrounding it, Mark Verhaegen suggests that our apelike ancestors led what he calls an aquarboreal life, on the borders between forest and swamp lands. There they fed on shellfish and other waterborne foods, as well as on plants and animals in the neighboring forested area…In this view, the environment that first shaped our evolution as humans was not so much the savanna as the beach. During the Ice ages the sea levels dropped, opening up territory rich in shellfish but largely devoid of trees. Our early Pleistocene forebears dispersed along the coasts, and fossils have been discovered not only in Africa but as far away as Indonesia, Georgia, and even England. Stone tools were first developed not so much for cutting carcasses of game killed on land as for opening and manipulating shells. Bipedalism too was an adaptation not so much for walking and running as for swimming and shallow diving.

Verhaegen lists a number of features that seem to have emerged only in Pleistocene fossils, some of which are present in other diving species but not in pre-Pleistocene hominins. These include loss of fur; an external nose; a large head; and head, body, and legs all in a straight line. The upright stance may have helped individuals stand tall and spot shellfish in the shallow water. Later, in the Pleistocene, different Homo populations ventured inland along rivers and perhaps then evolved characteristics more suited to hunting land-based animals. The ability to run, for instance, seems to have evolved later in the Pleistocene. But Verhaegen suggests that, in fact, we are poorly adapted to a dry, savannalike environment and retain many littoral adaptations (that is, adaptations to coastal regions): “We have a water- and sodium-wasting cooling system of abundant sweat glands, totally unfit for a dry environment. Our maximal urine concentration is much too low for a savanna-dwelling mammal. We need much more water than other primates and have to drink more often than savanna inhabitants, yet we cannot drink large quantities at a time.”

Part of the reason for our swollen brains may derive from a diet of shellfish and other fish accessible the shallow-water foraging [sic]. Seafood supplies docosahexaenoic acid (DHA), an omega 3 fatty acid, and some have suggested that it was this that drove the increase in brain size, reinforcing the emergence of language and social intelligence.

Michael A. Crawford and colleagues have long proposed that we still need to supplement our diets with DHA and other seafoods to maintain fitness. Echoing Crawford, Marcos Duarte issues a grim warning: “The sharp rise in brain disorders, which, in many developed countries, involves social costs exceeding those of heart disease and cancer combined, has been deemed the most worrying change in disease pattern in modern societies, calling for urgent consideration of seafood requirements to supply the omega 3 and DHA required for brain health.”
The Truth About Language: What It Is and Where It Came From; pp. 95-97

2.) Chimpanzees appear to have little control over the types of sounds that they make. Vocalization in primates appears to be largely instinctual, and not under conscious control.

3.) Although apes such as chimpanzees, bonobos and gorillas cannot learn spoken language, they can be taught to communicate with humans using sign language or symbol keyboards. Individual apes have learned vocabularies of hundreds (by some accounts, over a thousand) signed words or symbols, most notably Koko the gorilla and the bonobo Kanzi.

Manual activity in primates is intentional and subject to learning, whereas vocalizations appear to be largely involuntary and fixed. In teaching great apes to speak, much greater success has been achieved through gesture and the use of keyboards than through vocalization, and the bodily gestures of apes in the wild are less contained by context than are their vocalizations. These observations strongly suggest that language evolved from manual gestures. p. 57



4.) Mirror neurons are neurons in our brain that fire not only when we perform an action, but also when we watch someone else perform that action. They were first discovered in monkeys (they are sometimes called “monkey-see, monkey-do” neurons), but are present in all apes. These are part of a larger network of regions called the mirror system. It has been proposed that language grew out of this mirror system. The underlying idea is that, “[W]e perceive speech not in terms of the acoustic patterns it creates, but in terms of how we ourselves would articulate it.” (p. 61) This is called the motor theory of speech perception. If this theory is true, it would point to an origin of language in gestural imitation rather than in calls, which do not recruit mirror neurons in other primates.

The mirror system, in contrast to the primate vocalization system, has to do with intentional action, and is clearly modifiable through experience. For example, mirror neurons in the monkey brain respond to the sounds of certain actions, such as the tearing of paper or the cracking of nuts, and these responses can only have been learned. The neurons were not activated, though, by monkey calls, suggesting that vocalization itself is not part of the mirror system in monkeys…

…in the monkey, mirror neurons responded to transitive acts, as in reaching for an actual object, but do not respond to intransitive acts, where a movement is mimed and involves no object. In humans, by contrast, the mirror system responds to both transitive and intransitive acts, and the incorporation of intransitive acts would have paved the way to the understanding of acts that are symbolic rather than object-related…functional magnetic resonance imaging (fMRI) in humans shows that the mirror-neuron region of the premotor cortex is activated not only when they watch movements of the foot, hand, and mouth, but also when they read phrases pertaining to these movements. Somewhere along the line, the mirror system became interested in language. p. 62

5.) The anatomical structures in the mouth and throat required to produce something like human vocal patterns (phonemes) also came fairly late in human evolution. There is no evidence that even archaic humans could do it properly:

One requirement for articulate speech was the lowering of the larynx, creating a right-angled vocal tract that allows us to produce the wide range of vowels that characterize speech. Philip Lieberman has argued that this modification was incomplete even in the Neanderthals…Daniel Lieberman…had shown that the structure of the cranium underwent changes after we split with the Neanderthals. One such change is the shortening of the sphenoid, the central bone of the cranial base from which the face grows forward, resulting in a flattened face. The flattening may have been part of the change that created the right-angled vocal tract, with horizontal and vertical components of equal length. This is the modification that allowed us the full range of vowel sounds, from ah to oo.

Other anatomical evidence suggests that the anatomical requirements for fully articulate speech were probably not complete until late in the evolution of Homo. For example, the hypoglossal nerve, which innervates the tongue, is also much larger in humans, perhaps reflecting the importance of tongued gestures in speech. The evidence suggests that the size of the hypoglossal canal in early australopithecines, and perhaps in Homo habilis, was within the range of that in modern great apes, while that of the Neanderthal and early H. sapiens skulls was contained well within the modern human range, although this has been disputed.

A further clue comes from the finding that the thoracic region of the spinal cord is relatively larger in humans than in nonhuman primates, probably because breathing during speech involves extra muscles of the thorax and abdomen. Fossil evidence indicates that this enlargement was not present in early hominins or even in Homo ergaster, dating from 1.6 million years ago, but was present in several Neanderthal fossils.

Emboldened by such evidence…Philip Lieberman has recently made the radical claim that “fully human speech anatomy first appears in the fossil record in the Upper Paleolithic (about 50,000 years ago) and is absent in both Neanderthals and earlier humans.” This provocative statement suggests that articulate speech emerged even later than the arrival of Homo sapiens some 150,000 to 200,000 years ago. While this may be an extreme conclusion, the bulk of evidence does suggest that autonomous speech emerged very late in the human repertoire…pp. 72-74

Primer: Acoustics and Physiology of Human Speech (The Scientist)

Interestingly, even though the anatomical evidence for a late development of speech is itself fairly recent, Jaynes argued for a late Pleistocene origin of speech in Homo sapiens (but not in other archaic humans) back in 1976. He also implied that earlier communication was unspoken, possibly through hand gestures, much the way Corballis argues:

It is commonly thought that language is such an inherent part of the human constitution that it must go back somehow through the tribal ancestry of man to the very origin of the genus Homo, that is, for almost two million years. Most contemporary linguists of my acquaintance would like to persuade me that this is true. But with this view, I wish to totally and emphatically disagree. If early man, through these two million years, had even a primordial speech, why is there so little evidence of even simple culture or technology? For there is precious little archaeologically up to 40,000 B.C., other than the crudest of stone tools.

Sometimes the reaction to a denial that early man had speech is, how then did man function or communicate? The answer is very simple: just like all other primates with an abundance of visual and vocal signals which were very far removed from the syntactical language that we practice today. And when I even carry this speechlessness down through the Pleistocene Age, when man developed various kinds of primitive pebble choppers and hand axes, again my linguist friends lament my arrogant ignorance and swear oaths that in order to transmit even such rudimentary skills from one generation to another, there had to be language.

But consider that it is almost impossible to describe chipping flints into choppers in language. The art was transmitted solely by imitation, exactly the same way in which chimpanzees transmit the trick of inserting straws into ant hills to get ants. It is the same problem as the transmission of bicycle riding: does language assist at all?

Because language must make dramatic changes in man’s attention to things and persons, because it allows a transfer of information of enormous scope, it must have developed over a period that shows archaeologically that such changes occurred. Such a one is the late Pleistocene, roughly from 70,000 B.C. to 8000 B.C. This period was characterized climatically by wide variations in temperature, corresponding to the advance and retreat of glacial conditions, and biologically by huge migrations of animals and man caused by these changes in weather. The hominid population exploded out of the African heartland into the Eurasian subarctic and then into the Americas and Australia. The population around the Mediterranean reached a new high and took the lead in cultural innovation, transferring man’s cultural and biological focus from the tropics to the middle latitudes. His fires, caves and furs created for a man a kind of transportable microclimate that allowed these migrations to take place.

We are used to referring to these people as late Neanderthalers [sic]. At one time they were thought to be a separate species of man supplanted by Cro-Magnon man around 35,000 B.C. But the more recent view is that they were part of the general human line, which had great variation, a variation that allowed for an increasing pace of evolution, as man, taking his artificial climate with him, spread into these new ecological niches. More work needs to be done to establish the true patterns of settlement, but the most recent emphasis seems to be on its variation, some groups continually moving, others making seasonal migrations, and others staying at a site all the year round.

I am emphasizing the climate changes during this last glacial age because I believe these changes were the basis of the selective pressures behind the development of language through several stages. OoCitBotBM; pp. 129-131

Thus, Jaynes falls into the camp that argues that language was the decisive factor in the transition to behavioral modernity as seen in the archaeological record (as do many others). This would also explain the relative stasis of cultures like that of Homo erectus, whose tools remained basically unchanged for hundreds of thousands of years and which show no signs of art, music, or any other kind of abstract thinking.

6.) People using sign language utilize the exact same areas of the brain (as shown by fMRI scans, for example) as people engaged in verbal speech.

Even in modern humans, mimed action activates the brain circuits normally thought of as dedicated to language…activities elicited activity in the left side of the brain in frontal and posterior areas–including Broca’s and Wernicke’s areas–that have been identified since the nineteenth century as the core of the language system…these areas have to do, not just with language, but with the more general linking of symbols to meaning, whether the symbols are words, gestures, images, sounds, or objects….We also know that the use of signed language in the profoundly deaf activates the same brain areas that are activated by speech…p. 64

7.) Hand gestures do not require linearization. Corballis gives the example of an elephant and a woodshed. While some words do sound like what they describe (onomatopoeic words), most do not. In fact, they cannot. Thus, it would be difficult for sounds alone to distinguish between things such as elephants and woodsheds. Gestures, however, are much less limited in their descriptiveness.

Speech…requires that the information be linearized, piped into a sequence of sounds that are necessarily limited in terms of how they can capture the spatial and physical natures of what they represent…Signed languages are clearly less constrained. The hands and arms can mimic the shape of real-world objects and actions, and to some extent lexical information can be delivered in parallel instead of being forced into a rigid temporal sequence. With the hands, it is almost certainly possible to distinguish an elephant from a woodshed, in purely visual terms. pp. 65-66

But see this: Linguistic study proves more than 6,000 languages use similar sounds for common words (ABC)

Over time, sounds may have supplemented hand gestures because they are not dependent on direct lines of sight. They can also transmit descriptive warning calls more effectively (“Look out, a bear is coming!”). Corballis speculates that facial gestures became increasingly incorporated with manual gestures over time, and that these facial gestures eventually also became combined with rudimentary sounds. This was the platform for the evolution of speech. Finally, freeing up the hands completely from the need for communication would have allowed for carrying objects and tool manufacture that was simultaneous with communication.

The switch, then, would have freed the hands for other activities, such as carrying and manufacture. It also allows people to speak and use tools at the same time. It might be regarded, in fact, as an early example of miniaturization, whereby gestures are squeezed from the upper body to the mouth. It also allows the development of pedagogy, enabling us to explain skilled actions while at the same time demonstrating them, as in a modern television cooking show. The freeing of the hands and the parallel use of speech may have led to significant advances in technology, and help explain why humans eventually predominated over the other large-brained hominins, including the Neanderthals, who died out some 30,000 years ago. p. 78

Incidentally, miniaturization, or at least the concept of it, also played a critical role in tool development for Homo sapiens: From Stone Age Chips to Microchips: How Tiny Tools Made Us Human (Emory University)

Eventually, speech supplanted gesture as the dominant method of communication, although hand gestures have never completely gone away, as exemplified by mimes, deaf people, and Italians. Gestures, such as pointing, mimicking, and picking things up, are all still used during the acquisition of language, as any teacher of young children will attest.

Why apes can’t talk: our study suggests they’ve got the voice but not the brains (The Conversation)

The Recursive Mind (Review) – 1

Pink Floyd does recursion

I first learned about recursion in the context of computer programming. The output of some code was fed back as an input into the same code. This kept going until some stopping condition was met. I’m sure every novice programmer has made the mistake of leaving that condition out, or specifying it incorrectly, leading to a loop that never ends. It’s practically a rite of passage in learning programming.
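For anyone who didn’t come to recursion through programming, here is a minimal sketch (in Python, purely illustrative) of what that looks like, stopping condition included:

```python
# A minimal sketch of recursion: a function that calls itself on a simpler
# version of its own input, until a base case stops it.
def countdown(n):
    if n <= 0:          # the base case -- the "stopping condition"
        print("done")
        return
    print(n)
    countdown(n - 1)    # the function feeds a reduced input back into itself

countdown(3)  # prints 3, 2, 1, done

# Omit the base case and the function keeps calling itself until the runtime
# gives up (in Python, a RecursionError once the call stack overflows --
# the novice's loop that never ends).
```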

I would be remiss not to quote the poet laureate of recursion, Douglas Hofstadter:

WHAT IS RECURSION? It is…nesting, and variations in nesting. The concept is very general. (Stories inside stories, movies inside movies, paintings inside paintings, Russian dolls inside Russian dolls (even parenthetical comments inside parenthetical comments!)–these are just a few of the charms of recursion.)…

Sometimes recursion seems to brush paradox very closely. For example, there are recursive definitions. Such a definition may give the casual viewer the impression that something is being defined in terms of itself. That would be circular and lead to infinite regress, if not to paradox proper. Actually, a recursive definition (when properly formulated) never leads to infinite regress or paradox. This is because a recursive definition never defines something in terms of itself, but always in terms of simpler versions of itself. GEB, Chapter V

Here’s another great example of recursion: a commemorative plaque in Toronto commemorating its own installation: A recursive plaque honoring the installation of a plaque honoring the installation of a plaque honoring the installation of…(BoingBoing)

This commemorative plaque commemorates its own dedication which commemorates its own dedication which commemorates…

Thus, I will define recursion for our purposes as the nesting of like within like. Or, rules that can apply to their own output. A common image used to show this is the Russian Matryoshka dolls, which adorn the cover of The Recursive Mind by Michael C. Corballis, the book we’ll be considering today.

These dolls work in a pretty interesting way. Within each one, there is another doll that is exactly the same. You have multiple copies of the same doll, each within another, until eventually, you get to the smallest doll.

To Understand Recursion, You Must First Understand Recursion (Words and Code)

Another example is what’s called the Droste effect, after this can of Droste’s Cacao which references itself (which references itself, and…). This effect has subsequently been replicated in a number of product packages.

Another definition is, “a procedure which calls itself, or…a constituent that contains a constituent of some kind.” Thus, recursion can be understood as both a process and a structure.

In linguistics, recursion is the unlimited extension of language. It is the ability to embed phrases within phrases and sentences within sentences resulting in the potential of a never-ending sentence.

The Recursiveness of Language – A Linkfest (A Walk in the WoRds)

You can even have a book within a book—such as, for example, The Hipcrime Vocab, the book referenced inside John Brunner’s Stand on Zanzibar, from which this blog takes its name.

Often recursive processes produce recursive structures. Not always, though. For example, an iterative structure can be derived from a recursive process. Something like

AAAAABBBBB can be generated using a recursive procedure. You just nest the AB’s like so:

(A(A(A(A(AB)B)B)B)B)

But—and this turns out to be important—there is nothing in the above structure that indicates it must have been generated recursively. You could just have a series of A’s followed by a series of B’s. This may seem like a trivial point, but what it means is that there could be recursion behind something that does not seem recursive. And the reverse is also true—something might look recursive, but be generated via non-recursive means. The AB sequence shown above could be generated either way. This means that some apparent examples of recursion might actually be something else, such as repetition or iteration. As we’ll see, this means it can be quite tricky to determine whether there truly are examples of recursive thought in non-human animals or human ancestors.
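A short sketch makes the point concrete (in Python, my own illustration): the very same string can be produced by a recursive procedure or by plain iteration, and nothing in the output tells you which was used.

```python
# Two ways to produce the same surface string "AAAAABBBBB" (illustrative sketch).

def nested(n):
    # Recursive: wrap an A...B pair around a smaller version of itself,
    # i.e. (A(A(A(A(AB)B)B)B)B) with the brackets removed.
    if n == 0:
        return ""
    return "A" + nested(n - 1) + "B"

def flat(n):
    # Iterative: just write n A's followed by n B's.
    return "A" * n + "B" * n

print(nested(5))  # AAAAABBBBB
print(flat(5))    # AAAAABBBBB -- identical output, so the string alone
                  # cannot tell you whether it was generated recursively.
```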

Let’s start with a simple linguistic example. Let’s say I take a simple noun-verb phrase like The dog has a ball. Let’s add another basic noun-verb phrase about the dog: The dog is brown. Each of these is a standalone idea. But I can nest them inside one another this way: The dog who is brown has a ball, or The brown dog has a ball, or The brown dog’s ball, etc.

Then let’s add this fact: The dog belongs to Erik. Therefore, Erik’s brown dog has a ball. Let’s say it’s my ball. Erik’s brown dog has my ball. Maybe the dog is barking at me right now. Erik’s brown dog, who has my ball, is barking at me right now. Do you get it? You get that Erik’s brown dog who has my ball is barking at me right now.

Anyway, we could go on doing this all day, but I think you get the point. Recursive structures can theoretically go on to infinity, but in reality are constrained. After all, there’s only so much time in the day. Corballis explains that recursive constructions need not involve embedding of exactly the same constituents, but constituents of the same kind—a process known as self-similar embedding. He gives the example of noun phrases. For example, Nusrat Fateh Ali Khan’s first album was entitled “The Day, The Night, The Dawn, The Dusk” (you can listen to it here). That’s basically four noun phrases. From these constituents, one can make a new phrase like “The day gives way to night,” or perhaps a movie title like From Dusk Till Dawn.
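To make “self-similar embedding” concrete, here is a toy sketch (in Python, my own illustration, not from the book) of a grammar in which a noun phrase can contain a clause that itself contains another noun phrase:

```python
import random

# A toy grammar (illustrative only) in which a noun phrase may contain a
# relative clause that itself contains another noun phrase -- constituents
# of the same kind nested inside one another.

def noun_phrase(depth):
    np = random.choice(["the dog", "the ball", "Erik", "the day", "the night"])
    if depth > 0 and random.random() < 0.7:
        # embed a relative clause, which recursively contains a noun phrase
        np += " that " + random.choice(["has", "chases", "barks at"]) + " " + noun_phrase(depth - 1)
    return np

random.seed(1)
print(noun_phrase(3))
# e.g. "the dog that has the ball that chases Erik" -- the nesting is limited
# only by the depth we allow, not by the rule itself.
```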

Recursive language is needed to express recursive thoughts, and there’s a good case to be made that recursive thought is the key to our unique cognitive abilities. This is exactly the case Corballis makes.

…recursive processes and structures can in principle extend without limit, but are limited in practice. Nevertheless, recursion does give rise to the *concept* of infinity, itself perhaps limited to the human imagination. After all, only humans have acquired the ability to count indefinitely, and to understand the nature of infinite series, whereas other species can at best merely estimate quantity, and are accurate only up to some small finite number. Even in language, we understand that a sentence can in principle be extended indefinitely, even though in practice it cannot be–although the novelist Henry James had a damn good try…

Corballis mentioned Henry James, above, and below is his longest sentence. Click on the link to see its structure diagrammed, whereupon you can see the recursive (embedded) nature of his language more clearly.

“The house had a name and a history; the old gentleman taking his tea would have been delighted to tell you these things: how it had been built under Edward the Sixth, had offered a night’s hospitality to the great Elizabeth (whose august person had extended itself upon a huge, magnificent and terribly angular bed which still formed the principal honour of the sleeping apartments), had been a good deal bruised and defaced in Cromwell’s wars, and then, under the Restoration, repaired and much enlarged; and how, finally, after having been remodelled and disfigured in the eighteenth century, it had passed into the careful keeping of a shrewd American banker, who had bought it originally because (owing to circumstances too complicated to set forth) it was offered at a great bargain: bought it with much grumbling at its ugliness, its antiquity, its incommodity, and who now, at the end of twenty years, had become conscious of a real aesthetic passion for it, so that he knew all its points and would tell you just where to stand to see them in combination and just the hour when the shadows of its various protuberances—which fell so softly upon the warm, weary brickwork—were of the right measure.” (James 2003, 60)

The Henry James Sentence: New Quantitative Approaches (Jonathan Reeve)

The appealing aspect of recursion is that it can in principle extend indefinitely to create thoughts (and sentences) of whatever complexity is required. The idea has an elegant simplicity, giving rise to what Chomsky called “discrete infinity,” or Wilhelm Humboldt famously called “the infinite use of finite means.” And although recursion is limited in practice, we can nevertheless achieve considerable depths of recursive thought, arguably unsurpassed in any other species. In chess, for example, a player may be able to think recursively three or four steps ahead, examining possible moves and countermoves, but the number of possibilities soon multiplies beyond the capacity of the mind to hold them.

Deeper levels of recursion may be possible with the aid of writing, or simply extended time for rehearsal and contemplation, or extended memory capacity through artificial means. The slow development of a complex mathematical proof, for example, may require subtheorems within subtheorems. Plays or novels may involve recursive loops that build slowly—in Shakespeare’s Twelfth Night, for example, Maria foresees that Sir Toby will eagerly anticipate that Olivia will judge Malvolio absurdly impertinent to suppose that she wishes him to regard himself as her preferred suitor. (This is recursive embedding of mental states, in that Sir Toby’s anticipation is embedded in what Maria foresees, Olivia’s judgement is embedded in what Sir Toby anticipates, and so on).

As in fiction, so in life; we all live in a web of complex recursive relationships, and planning a dinner party may need careful attention to who thinks what of whom. pp. 8-9

We do indeed live in a web of complex social relationships, but some of us live in a more complex web than others. A small village in the jungle is vastly different from a medieval free city, and certainly different from a modern city of millions of people. Similarly, where one lives and what one does for a living have an effect also. A politician or businessman lives in a much more complex social world than a painter or a ratcatcher. People who grow up in a village where everyone is related to one another have a much easier cognitive task than a traveling salesman, or an international diplomat.

I point all this out to prepare the way for an argument I’m going to make later on, which is my own, but loosely based on ideas from Julian Jaynes. I’m going to make the case that increasing social complexity in human societies over time selected for recursive thinking abilities. I will also argue that such abilities led to the creation of things like writing and mathematics, which emerged only several thousand years ago, and were initially the province of a small number of elites (indicating that such abilities may be quite recent). I will also argue that recursive thinking allowed for advanced organization and planning abilities, which early leaders used to justify their elevated social status. Furthermore, I will argue that the type of “reflective self” that Jaynes saw developing during the Axial Age was due to increasingly recursive modes of thought. It was not caused by social breakdown, but rather by the increasing cognitive challenges demanded by social structures, as opposed to the primarily environmental challenges that earlier humans faced. This should become clearer as we discuss the social benefits of recursive thinking below.

In other words, consciousness did not arise so much from the breakdown of the bicameral mind, as it did from the rise of the recursive mind. That’s my argument, anyway.

As recursive thinking advanced, so too did the abilities which Jaynes notes as giving rise to the construction of the reflective, vicarial self—extended metaphor, mental time travel, higher-order theory of mind, and so on, as we’ll see. The lack or paucity of recursive thought, in contrast, prior to this period, is what prevented reflective self-consciousness (or, in Jaynes’s parlance, “consciousness”) from developing. Thus my timeline is similar to Jaynes’s, as are the conclusions, but the underlying reasons differ. We’ll get into this in more depth later.

An example of the infinitely extensible nature of language is novelties like one-sentence novels, of which there are a surprisingly large number. Here is a good review of three of the best ones, where the review itself is written as a single sentence (providing yet another example of recursion!):

Awe-Inspiring One-Sentence Novels You Never Knew Existed (The GLOCAL Experience)

In 2016, an Irish novelist won a literary prize for a one-sentence novel. To me, this novel is exemplary of the kinds of recursive thinking we’re describing here, and of how that thinking is necessary to construct the vicarial self (the Analog ‘I’ and Metaphor ‘me’). The novel demonstrates not only a highly embedded (recursive) sentence, but also mental time travel, advanced theory of mind (the ability to extrapolate the mental states of other characters by inserting oneself into their experience; a requirement of good fiction), and autobiographical narratization (about which, more below). We’ll cover each of these concepts in more depth:

It stutters into life, like a desperate incantation or a prose poem, minus full-stops but chock-full of portent: “the bell / the bell as / hearing the bell as / hearing the bell as standing here / the bell being heard standing here / hearing it ring out through the grey light of this / morning, noon or night”…The speaker hearing the bell is one Marcus Conway, husband, father and a civil engineer in some small way responsible for the wild rush of buildings, roads and bridges that disrupted life in Ireland during the boom that in the book has just gone bust. Marcus is a man gripped by “a crying sense of loneliness for my family”. We don’t quite know why until the very end of the novel, which comes both as a surprise and a confirmation of all that’s gone before.

Among its many structural and technical virtues, everything in the book is recalled, but none of it is monotonous. Marcus remembers the life of his father and his mother, for example, a world of currachs and Massey Fergusons. He recalls a fateful trip to Prague for a conference. He recalls Skyping his son in Australia, scenes of intimacy with his wife, and a trip to his artist daughter’s first solo exhibition, which consists of the text of court reports from local newspapers written in her own blood, “the full gamut from theft and domestic violence to child abuse, public order offences, illegal grazing on protected lands, petty theft, false number plates, public affray, burglary, assault and drunk-driving offences”. Above all, he remembers at work being constantly under pressure from politicians and developers, “every cunt wanting something”, the usual “shite swilling through my head, as if there weren’t enough there already”. He recalls when his wife got sick from cryptosporidiosis, “a virus derived from human waste which lodged in the digestive tract, so that […] it was now the case that the citizens were consuming their own shit, the source of their own illness”.

Single sentence novel wins Goldsmiths prize for books that ‘break the mould’ (The Guardian)

Solar Bones by Mike McCormack review – an extraordinary hymn to small-town Ireland (The Guardian)

In the example above, we can see how recursive thought is intrinsically tied to self-identity, which is in turn connected with episodic memory, which, as we will see, is also tied to recursion. In brief, I will argue that recursive thought is tied to the kind of reflective self-consciousness that Jaynes was describing; as such, we should be concerned not so much with the beginning of language as the origin of consciousness, but with the beginning of recursive thought as the origin of consciousness. It is quite possible for spoken language to have existed for communicative purposes for thousands of years prior to recursive thought and its subsequent innovations.

I focus on two modes of thought that are recursive, and probably distinctively human. One is mental time travel, the ability to call past episodes to mind and also to imagine future episodes. This can be a recursive operation in that imagined episodes can be inserted into present consciousness, and imagined episodes can even be inserted into other imagined episodes. Mental time travel also blends into fiction, whereby we imagine events that have never occurred, or are not necessarily planned for the future. Imagined events can have all of the complexity and variability of language itself. Indeed I suggest that language emerged precisely to convey this complexity, so that we can share our memories, plans and fictions.

The second aspect of thought is what has been called theory of mind, the ability to understand what is going on in the minds of others. This, too, is recursive. I may know not only what you are thinking, but I may also know that you know what I am thinking. As we shall see, most language, at least in the form of conversation, is utterly dependent on this capacity. No conversation is possible unless the participants share a common mind-set. Indeed, most conversation is fairly minimal, since the thread of conversation is largely assumed. I heard a student coming out of a lecture saying to her friend, “That was really cool.” She assumed, probably rightly, that her friend knew exactly what “that” was, and what she meant by “cool.” pp. ix-x

It goes beyond that, however. Later, we’ll look at work by Robin Dunbar which suggests that both organized religion and complex kinship groupings are dependent upon these same recursive thought processes. Given how important those social structures have been to human dominance of the planet (they may be the most important factor of all), recursion might be the skeleton key to everything that makes us uniquely human. This is especially true given the evidence (although it is disputed) that our predecessor species (i.e. Archaic Humans and earlier) were unable to engage in this kind of thinking.
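Since recursion is the thread running through all of this, it may help to make the idea of nested mental states concrete. Below is a minimal sketch (my own illustration in Python, not anything from Corballis or Dunbar) of how one mind’s model of another mind can be represented as a recursive data structure, with the depth of nesting corresponding to how many minds are embedded inside one another:

```python
# Toy illustration only (my own, not from the book): nested mental states
# ("Ted thinks that Alice wants Fred to go away") represented recursively.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MentalState:
    holder: str                             # who holds this state
    attitude: str                           # "thinks", "wants", "suspects", ...
    about: Optional["MentalState"] = None   # an embedded mental state, if any
    content: Optional[str] = None           # plain content at the innermost level

    def depth(self) -> int:
        """How many minds are embedded inside one another."""
        return 1 + (self.about.depth() if self.about else 0)

    def describe(self) -> str:
        inner = self.about.describe() if self.about else self.content
        return f"{self.holder} {self.attitude} that {inner}"


# "Alice wants Fred to go away" embedded inside "Ted thinks ..."
first = MentalState("Alice", "wants", content="Fred goes away")
second = MentalState("Ted", "thinks", about=first)

print(second.describe())  # Ted thinks that Alice wants that Fred goes away
print(second.depth())     # 2
```

The point is simply that “I know that you know what I am thinking” has the same shape as any other recursive structure: each level contains a smaller copy of the whole.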

Is Religion Merely A Cognitive Error?

One reason I’m intrigued by Jaynes’s idea is that it’s simply hard to explain the centrality of religion to ancient societies without recourse to something more than simple “cognitive errors.” After all, religion is costly. Think of all the time and energy that went into worship–elaborate rituals, lavish burials with grave goods, tombs, barrows and tumuli, sacrifices of both people and animals, dances and festivals, elaborate paintings and sculpture, and, of course, temples. Why didn’t atheistic societies take over the societies that wasted huge amounts of resources in this way?

The conventional wisdom is that religion was necessary for group cohesion in the days before bureaucracy, written documents, centralized government, and related institutions. But something about that seems inadequate to me. Does one need to build pyramids to have a cohesive society? Does one need to bury one’s ruler with thousands of terracotta warriors? Think of all the fantastic works of art, sculpture, and craftsmanship that were made simply to be sealed up in the tombs of Egypt and elsewhere. Think of all the craftsmanship that went into something like Tutankhamen’s death mask, for example. Even as far back as 34,000 years ago, people were burying some of their most labor-intensive goods in the ground.

Another school of thought puts it down as just a massive case of collective denial. While denial is not just a river in Egypt, it is near that river that we see some of its most impressive manifestations. The idea is that by building structures that last longer than we do, we transcend death–that is, we conquer, in some sense, our own mortality. But why does everyone else go along with this? Were the workers just as motivated to deny their own deaths by laboring on the pyramids, even though no one would remember who they were?

Why not put all that effort into raising real warriors and building stone fortifications, and take over one’s more superstitious neighbors while they bowed down to graven idols? Why not trade your highest-quality goods in markets instead of burying them or sealing them up forever in some tomb?

And that’s before we consider all the other strange behaviors. I’ve previously mentioned trepanation. From my (albeit limited) research, the two types of people who poke holes in their heads in modern times are these: voice hearers and LSD trippers. And what’s up with all the sacrifices?

Tower of human skulls found in Mexico City dig casts light on Aztec sacrifices (The Guardian)

Bowls of Fingers, Baby Victims, More Found in Maya Tomb (National Geographic)

When you start studying this stuff in depth, you realize that pretty much everything flowed from primitive religion in some way: politics, laws, marriage customs, inheritance, economic relationships, business partnerships, child-rearing, the status of women, family structures, and so on. Essentially, all laws and politics stemmed from religion. Huge amounts of social effort went into appeasing the gods. That’s one hell of a cognitive error!

Just how essential religion was to ancient cultures is summed up by this passage from The Ancient City:

A comparison of beliefs and laws shows that a primitive religion constituted the Greek and Roman family, established marriage and paternal authority, fixed the order of relationship, and consecrated the right of property, and the right of inheritance. This same religion, after having enlarged and extended the family, formed a still larger association, the city, and reigned in that as it had reigned in the family.

From [religion] came all the institutions, as well as all the private law, of the ancients. It was from this that the city received all its principles, its rules, its usages, and its magistracies. But, in the course of time, this ancient religion became modified or effaced, and private law and political institutions were modified with it. Then came a series of revolutions, and social changes regularly followed the development of knowledge.

It is of the first importance, therefore, to study the religious ideas of these peoples, and the oldest are the most important for us to know. For the institutions and beliefs which we find at the flourishing periods of Greece and Rome are only the development of those of an earlier age; we must seek the roots of them in the very distant past.

E. E. Evans-Pritchard summarized de Coulanges’ thesis this way:

The theme of The Ancient City is that ancient classical society was centred in the family in the wide sense of that word— joint family or lineage — and that what held this group of agnates together as a corporation and gave it permanence was the ancestor cult, in which the head of the family acted as priest.

In the light of this central idea, and only in the light of it, of the dead being deities of the family, all customs of the period can be understood: marriage regulations and ceremonies, monogamy, prohibition of divorce, interdiction of celibacy, the levirate, adoption, paternal authority, rules of descent, inheritance and succession, laws, property, the systems of nomenclature, the calendar, slavery and clientship, and many other customs. When city states developed, they were in the same structural pattern as had been shaped by religion in these earlier social conditions.

Traditions are basically dead people peer pressuring us. (Reddit Showerthoughts)

What appears to tie all of these together is ritual ancestor worship, also called veneration of the dead, ancestral veneration, or the cult of the dead. An ancestor cult is simply defined as “the continuing care of the dead under the assumption of their power.” And you see this emerging as religion in all larger, complex societies, from the New World to the Classical World to India to China to Indonesia. In China, especially, ancestral veneration was central to religious practice until relatively modern times, existing alongside philosophies like Taoism and Buddhism. In all of these societies, there seem to have been two parallel worships: the ancestor cult and a pantheon of deities who had some kind of power over the natural world.

Another thing you see repeatedly is the idea of a “layered world,” most likely derived from shamanistic practices. There are always a minimum of three layers: the central world inhabited by humans, a lower world inhabited by the dead, and an upper world inhabited by gods. Some cosmologies add more–there are nine worlds in Norse cosmology, for example. There is also some sort of connector between the worlds. In Norse mythology, it was the world tree, Yggdrasil; in China it was the Celestial Pole. Many of these religions, especially those of early complex societies like ancient Egypt, Babylonia, China, and the Maya, have a clear astrological basis as well: “Chinese theology may be also called Tiānxué 天學 (“study of Heaven”), a term already in use in the 17th and 18th century.” (Wikipedia)

The sheer universality of this phenomenon must have some sort of significance. Why do so many ancient societies worship their dead? Does it have something to do with the fact that, according to scientific surveys, a huge number of people report hearing, feeling, or even seeing their dead relatives during the grieving process? If you ask me, there’s been far too little overlap between anthropology and psychology.

I’m struck by just how similar these practices are across Eurasia, among cultures that could not possibly have acquired them through cultural diffusion. For example, I was listening to a TS podcast with a Balinese art expert. He pointed out that although Bali is known for Hinduism, what’s lesser known is that the original religion of Bali was ancestor worship, which is still practiced in villages. In this tradition, families must pay for elaborate funerary rites and make continuing offerings to appease the dead spirits.

“…I kept saying ‘Who you worshiping, Brahma, Vishnu or Shiva?’ And then they answered [Jiro Gde?]…So [Jiro?] means elevated, and Gde means the Great One. It’s a term that probably is more descended from the animist period, in the worship of great nature spirits. So the next question is, ‘Why are you doing this ceremony?’ Their response was also, like, ‘What do you ask such stupid questions for?’ I pressed them and pressed them. ‘Because we always do it.’ It was, of course, part of an ancient religious cycle, and ceremonial cycle, ritual cycle, that had been going on for centuries, and nobody questioned the validity or reason. It was obligatory. It was: you did it because you had to do it.”

“Another thing that many people don’t understand about the Balinese system of ancestor worship, which is also related to the tribal groups, is that the major purpose of cremation, and why cremations are joyous events, is to send off the spirit of the deceased to the land of the ancestors. And the reason you want to do it is because, before you’ve successfully fulfilled this very important ritual in the human life cycle, their spirits hang around here on earth. And the longer and more dissatisfied they are, the more trouble they can bring…all kinds of bad things. So basically, you want to get rid of them. You want to send them off in a glorious way so they’re happy.”

“And it doesn’t end there. It’s not like you just send them away. It’s like having somebody who becomes a member of Congress. You have a symbiotic relationship. And the symbiotic relationship is you constantly have to give offerings at the temple and do all sorts of things. They become the representatives of the family here in the celestial realm, and because of them, they bring good luck and blessings and prevent disasters from happening. So, in a certain sense, it’s a payoff religion. And this is true of most of the traditional societies in Indonesia.”

“For instance, the cremation here. Before there was cremation–you can see it in Pejeng, an area near Ubud, where they have the most ancient bronze age stone sarcophagi–they used to bury them there. That’s the secondary burial. The first one is because cremation and secondary burials like the ones in [Taraja?] are extremely expensive. It can bankrupt families. You have to borrow money and they’re very, very demanding. Balinese religion is a really demanding religion. Bali has the highest rate of suicide in Indonesia, and it is because of the religion. They’re constantly having to borrow money; they’re running from one debt to another debt…” [45:40]

Surprisingly, I couldn’t find much about Balinese ancestor worship online, but one snippet I did find is below from a book called The Anthropological Romance of Bali:

Relatively corporate ancestor-groups are optional in Balinese social structure and are actualized by building a high-level (supra-household) temple, often complemented by making intratemple marriages – for example, father’s-brother’s daughter. As the congregation supporting an ancestor’s temple expands, genealogical connections become obscure: outsiders might even be admitted if costs and upkeep grow burdensome; traditions of an ideal descent line may, however, persist. Yet the social integration of the group rests more on its temple duties per se and marriages between its members. According to high-caste traditions the ideal conveyors of a group’s identity and status are eldest sons of eldest sons, especially if they are born of a marriage with a near patrikinswoman.

Emphasis on eldest lines is an optional aspect of Balinese descent. Rules for actual inheritance of house property range from primogeniture to ultimogeniture, and every son assumes particular ceremonial responsibilities for ancestral shrines according to the share of productive fields and other material wealth received after the father’s death.

It is in certain textual traditions – the special province of royal houses, but imitated by ascendant commoner groups – that emphasis falls on eldest sons. An eldest son of the eldest agnatic line who is also the offspring of a patricousin marriage is enhanced in and of his descent; from birth he would be expected to be individually meritorious in keeping with this auspicious genealogy.

But occupants of the most highly regarded genealogical positions are not necessarily bearers of the most elaborate legends. Practical leadership of a group often falls to members not automatically qualified by descent. More pragmatic qualities take precedence, and the figures of actual leaders are then apt to be embellished, almost apologetically, with posthumous legends, stories, and anecdotes to show why it was – actual genealogical position notwithstanding – that they succeeded to leadership.

Compare this to various passages from The Ancient City giving a description of the Graeco-Roman veneration of the dead and the social organization that flowed from it:

The father ranks first in presence of the sacred fire. He lights it, and supports it; he is its priest. In all religious acts his functions are the highest; he slays the victim, his mouth pronounces the formula of prayer which is to draw upon him and his the protection of the gods. The family and the worship are perpetuated through him; he represents, himself alone, the whole series of ancestors, and from him are to proceed the entire series of descendants. Upon him rests the domestic worship. He can almost say, like the Hindu, “I am the god.” When death shall come, he will be a divine being whom his descendants will invoke. p. 69

[The] son had also his part in the worship; he filled a place in the religious ceremonies; his presence on certain days was so necessary that the Roman who had no son was forced to adopt a fictitious one for those days, in order that the rites be performed. And here religion established a very powerful bond between father and son. They believed in a second life in the tomb–a life happy and calm if the funeral repasts were regularly offered. Thus the father is convinced that his destiny after this life will depend on the care that his son will take of his tomb, and the son, on his part, is convinced that his father will become a god after death, whom he will have to invoke…

The old religion established a difference between the older and the younger son. “The oldest,” said the ancient Aryas, “was begotten for the accomplishment of the duty due the ancestors; the others are the fruit of love.” In virtue of this original superiority, the oldest had the privilege, after the death of the father, of presiding at all the ceremonies of domestic worship; he it was who offered the funeral repast, and pronounced the formulas of prayer: “for the right of pronouncing the prayers belongs to that son who came into the world first.” The oldest was, therefore, heir to the hymns, the continuator of the worship, the religious chief of the family. From this creed flowed a rule of law: the oldest alone inherited property. Thus says an ancient passage, which the last editor of the Laws of Manu still inserted in the code: “The oldest takes possession of the whole patrimony, and the older brothers live under his authority as if they were under that of their father. The oldest son performs the duties towards the ancestors; he ought, therefore, to have all.”

Greek law is derived from the same religious beliefs as Hindu law; it is not astonishing, then, to find here also the right of primogeniture. Sparta preserved it longer than other Greek cities, because the Spartans were longer faithful to old institutions; among them patrimony was indivisible, and the younger brothers had no part of it. It was the same with many of the ancient codes that Aristotle had studied. He informs us, indeed, that the Theban code prescribed absolutely that the number of lots of land should remain unchangeable, which certainly excluded division among brothers. An ancient law of Corinth also provided that the number of families should remain invariable, which could only be the case where the right of the oldest prevented families from becoming dismembered in each generation…

Sometimes the younger son was adopted into a family, and inherited property there, sometimes he married an only daughter; sometimes, in fine, he received some extinct family’s lot of land. When all these resources failed, younger sons were sent out to join a colony. pp. 66-67

It is clearly evident that private property was an institution that the domestic religion had need of. This religion required that both dwellings and burying-places should be separate from each other; living in common was, therefore, impossible. The same religion required that the hearth should be fixed to the soil, that the tomb should neither be destroyed nor displaced. Suppress the right of property, and the sacred fire would be without a fixed place, the families would become confounded, and the dead would be abandoned and without worship. By the stationary hearth and the permanent burial-place, the family took possession of the soil; the earth was in some sort imbued and penetrated by the religion of the hearth and of ancestors.

Thus the men of the early ages were saved the trouble of resolving too difficult a problem. Without discussion, without labor, without a shadow of hesitation, they arrived, at a single step and merely by virtue of their belief, at the conception of the right of property; this right from which all civilization springs, since by it man improves the soil and becomes improved himself. Religion, and not laws, first guaranteed the right of property. Every domain was under the eyes of household divinities, who watched over it…pp. 52-53

Thanks to the domestic religion, the family was a small organized body; a little society, which had its chiefs and its government. Nothing in modern society can give us an idea of this paternal authority. In primitive antiquity the father is not alone the strong man, the protector who has power to command obedience; he is the priest, he is heir to the hearth, the continuator of the ancestors, the parent stock of the descendants, the depository of the mysterious rites of the worship, and of the sacred formulas of prayer. The whole religion resides in him. p. 71


Just as each family had its own religion based on its ancestors, so too did each tribe have its own ancestral worship, leading to a sort of fractal, or recursive, organization of society around religion. In anthro jargon, these religions formed pantribal sodalities. Here is a description of the earliest forms of Chinese ancestor worship by Sir Leonard Woolley:

In the religions of latter-day China a very prominent part is played by ancestor worship. Since ancestor worship is wholly alien to Buddhism in its pure form as taught by Buddha, and since it is not included in the teaching (which is more philosophical than religious) of Lao Tzu, the founder of Taoism, its origin has to be sought elsewhere, and recent discoveries have proved that it is far older than any one of the systems which have been engrafted on it and must be accounted as a survival from the earliest days of Chinese civilization.

According to that belief a man’s real power began when he died. Death transformed the mortal man into a spirit, possessed of undefined but vast powers where his descendants were concerned. While not quite omniscient or omnipotent, the spirits could grant, or withhold, success in hunting, in agriculture, in war or in anything else, and they could punish those who failed to please them with famine, defeat, sickness or death; so awful were they that it was dangerous even to pronounce the personal names they had borne in life, and they were designated by their relationship and the day on which they were born or died, as “Grandfather Tuesday”, “Elder-brother Saturday”, and so on.

To the dead, then, offerings had to be made, both at the time of burial and afterwards, so long as the family remained. The dead man, wrapped, apparently, in matting, was laid in the grave with such furniture as his relatives could afford–in the case of the very poor with a few pottery vessels and perhaps a bronze dagger-axe, while an official of high rank might have a profusion of beautifully cast decorated bronze vessels. These were genuine objects, not the crude copies which in later times were specifically manufactured for burial purposes, nor the flimsy paper imitations of still more recent days; the Shang people seem not to have evolved the idea that spirits can be satisfied as much by the ‘ghosts’ of things as by the things themselves; for them the spirits were real and the offerings made to them must be real also.

In the case of kings realism was carried to the farthest extent. A pit was dug which might be 60 feet square and over 40 feet deep, with on each side a sloped passage or stairway leading down from ground level. In the pit, and covering the greater part of its area, there was constructed a tomb-chamber of wood finely carved or adorned with designs in polychrome lacquer; in this was laid the body of the king, and in and around it an astonishing wealth of objects, including such things as chariots with their horses, the bodies of attendants, women wearing elaborate head-dresses of turquoise or soldiers with copper helmets; then the pit was filled with earth pounded into a solid mass as was done for house foundations, and in the filling more human victims were also buried, so that the total number might run into two or three hundred.

After this elaborate ritual of burial, which bears in details a remarkable resemblance to the Sumerian ritual of the Early Dynastic period and may, like the use of metal, be due to western influences, there was still need for the regularly current sacrifices which furnished nourishment for the dead and won their favourable response to prayer. The spirits of the ancestors dwelt with and were under the rule of Ti, the great god, and they acted as mediators and intercessors between him and their human descendants; prayers to the ancestors take the form of imploring them to ask god to do this or that.

This mediation would be forthcoming only if the spirits were satisfied by the proper offerings. The character of these can be gathered from bone inscriptions. Drink offerings of spirituous liquor seem to have been the only product of the soil that was presented to the dead, or to the gods; of such things as bread or fruit there is no mention–in fact, according to a story of the Chou period, when a high official directed in his will that during the first year after his death his favourite delicacy, water-chestnuts, should be sacrificed to him, his strait-laced son decided that filial duty must give way to orthodox tradition and refused to carry out so irregular an order.

The normal sacrifices were of men and animals—cattle, sheep, pigs, dogs, and occasionally horses and birds. The total number of victims sacrificed at a time was usually small, from one to ten; but for an important ceremony it might be very large—‘one hundred cups of liquor, one hundred sheep and three hundred cattle’; and in several inscriptions a hundred and even three hundred human victims are mentioned. The human victims of a tomb sacrifice performed after the actual burial, either as the last act of the ceremony or at a later date, were decapitated and buried in pits, ten to a pit, sometimes with their hands tied behind their backs, furnished each with a uniform outfit of small bronze knives, axe-heads and grinding-stones, and their skulls were buried separately, in small square pits close by. With reference to these victims the bone inscriptions use different words: sometimes ‘men’, sometimes ‘captives’, but most often, and always where large numbers are concerned, ‘Ch’iang’, which, as written, combines the signs for ‘men’ and ‘sheep’ and is said to mean ‘barbarian shepherds of the West’.

All sacrifices other than those in the tombs of the kings were celebrated in temples, in ‘the House of the Spirits’. About the ritual very little is known. The liquor was poured out on the ground as a libation; animals, or special parts of the animals, were generally burnt by fire, but sometimes buried in the earth or thrown into water; the last two methods were employed for offerings to human ancestors, while the burnt offerings, according to the oracle bones, were destined for the gods; but how far this distinction really held good it is impossible to say, and it may even be that for the Shang people the distinction was too vague to be consistently observed.

…there were gods. Some of these were powers of nature or natural features; one oracle bone records ‘a burnt offering of four cattle to the sources of the Haan river’, the river on which the city Shang stood, perhaps an offering made because of drought such as that of c. 1190 BC when the river ceased to flow. The earth was a deity which later, and probably in Shang times also, was symbolized as an earthen mound (‘the Earth of the region’) piled up in the center of each village; possibly this is the ‘Queen Earth’ of after ages. Mention is made of the ‘Dragon Woman’ and of the ‘Eastern Mother’ and the ‘Western Mother’ and of the ‘Ruler of the [Four?] Quarters’; sacrifices are offered to the east, west and south, and to the wind, the ‘King wind’ and ‘the Wind, the Envoy of Ti’. Ti, or Shangti, ‘The Ruler Above’ seems to have been the chief god. He was specially concerned with war, and the king of Shang would not open a campaign without consulting Di; he was asked about the prospects of the year’s crops, he was one of the powers who could assure the sufficient rain, and generally he could allot good or bad fortune to men. War was, perhaps, his peculiar province, but his other attributes were shared by other gods and by the ancestors; at best he ranked as primus inter pares. It has, indeed, been suggested that he was himself but a deified ancestor, the progenitor of all the Shang kings, or that he embodies all the royal ancestry; that is possible, but the argument adduced in support of the theory, namely the fact that certain of the Shang kings bear such names as Ti I and Ti Hsin, could just as well be urged against it, seeing that theophoric names, i.e. names compounded with the name of the god, of the sort common in Sumer and in other lands of the ancient Middle East, imply the recognition of an already existing deity.

Both the gods and the ancestors existed; they had knowledge and they had power, power for good and for evil. The purpose of religion was therefore twofold: to secure by offerings the favour of the gods, so that they might grant to the suppliant not evil but good, and to wrest from the gods the knowledge that would guide his actions in this world. The sacrifices have been described; the knowledge was to be gained by divination.

One method of divination was, probably, by mediums, in Shang as in later days, but naturally no material evidence for that remains. The other method, for which we have evidence in plenty, was the interpretation of the cracks produced by heat in tortoise-shell or in bone. Of the two materials the former seems to have been the original and the most efficacious, for there were frequent references to consulting ‘the tortoise’, or ‘the Great Tortoise’, whereas bone is never mentioned as such. When, in 1395 BC, P’an Keng shifted his capital to Anyang he reminded his discontented subjects, ‘You did not presumptuously oppose the decision of the Tortoise’.

The questions are severely practical. Some deal with sacrifice, to whom it should be made—it was, of course, essential to find out which deity had to be propitiated—and when, and with what kind of offerings. A very common subject is war; the king enquires of the oracle when to declare war, how many men to engage, whether to attack or remain on the defensive, and what prospects there were of booty and prisoners? The crops–the outlook for each kind of grain and for the output of liquor; the weather, not only the general forecast but the immediate–‘Will it rain tonight?’ (and in a few cases we are given not only the official answer ‘No’ but the comment ‘It really didn’t rain!’); illness—will the patient recover?; dreams—does such and such a dream portend good or evil?; and the astrologer’s usual gambit, ‘Will next week be lucky or unlucky?’; and finally, and very often, ‘Will the Powers help?’ ‘Shall I receive aid?’ ‘Will the spirit of Grandfather aid the king?’ Such is the information that man in ancient China desired to obtain from the spirit world, and to obtain it was the whole purpose of religion.


This organization provided not only the social contract but also, as noted above, the notion of private property. Each family required its own ancestral tomb and sacred hearth. It therefore had its own land, owned not by individuals but by joint families. Some societies preserved this organization into modern times. In his book Primitive Property, Laveleye looks at the village communities of India and Java for a model of how primitive communities arranged their economic relations, such as land ownership:

In some remote regions the most archaic form of community is to be found, of which ancient authors make such frequent mention. The land is cultivated in common, and the produce divided among all the inhabitants. At the present time, however, collectivity no longer exists generally, except in the joint-family. This family community still exists almost everywhere, with the same features as the zadruga of the Southern Slavs.

Each family is governed by a patriarch, exercising despotic authority. The village is administered by a chief, sometimes elected, sometimes hereditary. In the villages where the ancient customs have been maintained, the authority belongs to a council, which is regarded as representing the inhabitants. The most necessary trades, such as those of the smith, the currier, the shoemaker, the functions of the priest and the accountant, devolve hereditarily in certain families, who have a portion of the land allotted to them by way of fee…In England, there are numerous traces to show that a custom formerly existed there exactly similar to that practised in India, a remarkable instance of the persistence of certain institutions in spite of time and national migrations.

This intimate association which forms the Hindu village rests even at the present day on family sentiment; for the tradition, or at least the idea, prevails among the inhabitants of descent from a common ancestor: hence arises the very general prohibition against land being sold to a stranger. Although private property is now recognized, the village, in its corporate capacity, still retains a sort of eminent domain. Testamentary disposition was not in use among the Hindus any more than among the Germans or the Celts. In a system of community there was no place for succession or for legacies. When, in later times, individual property was introduced, the transmission of property was regulated by custom.

As Sir H. Maine remarks, in the natural association of the primitive village, economical and juridical relations are much simpler than in the social condition, of which a picture has been preserved to us in the old Roman law and the law of the Twelve Tables. Land is neither sold, leased, nor devised. Contracts are almost entirely unknown. The loan of money for interest has not been thought of. Commodities only are the subject of ordinary transaction, and in these the great economic law of supply and demand has little room for action. Competition is unknown, and prices are determined by custom. The rule, universal with us, of selling in the dearest market possible and buying in the cheapest, cannot even be understood. Every village and almost every family is self-sufficient. Produce hardly takes the form of merchandise destined for exchange, except when sent to the sovereign as taxes or rent. Human existence almost resembles that of the vegetable world, it is so simple and regular.

In the dessa of Java, and in the Russian mir, we can grasp, in living form, civilization in its earliest stage, when the agricultural system takes the place of the nomadic and pastoral system. The Hindu village has already abandoned community, but it still retains numerous traces of it. In its relations with the state, the village is regarded as a jointly responsible corporation. The state looks to this corporation for the assessment and levying of imposts, and not to the individual contributor…The village owns the forest and uncultivated land, as undivided property, in which all the inhabitants have a right of enjoyment. As a rule, the arable land is no longer common property, as in Java or in Germany in the days of Tacitus. The lots belong to the families in private ownership, but they have to be cultivated according to certain traditional rules which are binding on all.

It appears that cultures like Bali, Java, India, China and the Graeco-Roman world had two distinct religions. The older one was the veneration of one’s ancestors, centered around the domestic temple or hearth and based on the ongoing maintenance of the relationship with the dead–the ceremonial offerings, the funerary repasts, the sacrificial rites, burial practices, and so on. The other was a broader public worship of a pantheon of major deities connected to nature or the stars, based in temples and mediated by a professional class of priests. It was this latter worship, de Coulanges attests, that allowed the ancient city-states to form.

We are correct, therefore, in saying that this second religion was at first in unison with the social condition of men. It was cradled in each family, and remained long bounded by this narrow horizon. But it lent itself more easily than the worship of the dead to the future progress of human association. Indeed, the ancestors, heroes, and manes were gods who by their very nature could be adored only by a very small number of men, and who thus established a perpetual and impassable line of demarcation between families.

The religion of the gods of nature was more comprehensive. No rigorous laws opposed the propagation of the worship of any of these gods. There was nothing in their nature that required them to be adored by one family only, and to repel the stranger. Finally, men must have come insensibly to perceive that the Jupiter of one family was really the same being or the same conception as the Jupiter of another, which they would never believe of two Lares, two ancestors, or two sacred fires.

Let us add, that the morality of this new religion was different. It was not confined to teaching men family duties. Jupiter was the god of hospitality; in his name came strangers, suppliants, “the venerable poor,” those who were to be treated “as brothers.” All these gods often assumed the human form, and appeared among mortals; sometimes, indeed, to assist in their struggles and to take part in their combats; often, also, to enjoin concord, and to teach them to help each other.

As this second religion continued to develop, society must have enlarged. Now, it is quite evident that this religion, feeble at first, afterwards assumed large proportions. In the beginning it was, so to speak, sheltered under the protection of its elder sister, near the domestic hearth. There the god had obtained a small place, a narrow cella, near and opposite to the venerated altar, in order that a little of the respect which men had for the sacred fire might be shared by him. Little by little, the god, gaining more authority over the soul, renounced this sort of guardianship, and left the domestic hearth. He had a dwelling of his own, and his own sacrifices. This dwelling (ναός, from ναίω, to inhabit) was, moreover, built after the fashion of the ancient sanctuary; it was, as before, a cella opposite the hearth; but the cella was enlarged and embellished, and became a temple. The holy fire remained at the entrance of the god’s house, but appeared very small by the side of this house. What had at first been the principal, had now become only an accessory. It ceased to be a god, and descended to the rank of the god’s altar, an instrument for the sacrifice. Its office was to burn the flesh of the victim, and to carry the offering with men’s prayers to the majestic divinity whose statue resided in the temple.

When we see these temples rise and open their doors to the multitude of worshipers, we may be assured that human associations have become enlarged… pp. 103-104

Why so many gods? I found an article about Hinduism–the largest living polytheistic religion–that gives a good explanation. Even the spirit world apparently requires bureaucracy and middle management:

…For a country, state, or city to run properly, the government creates various departments and employs individuals within those departments — teachers, postal workers, police and military personnel, construction workers, doctors, politicians, and so many more. Each of these departments employs hundreds or thousands of individuals carrying out their respective duties and each sector has an individual or multiple individuals that oversee the activities of that one unit. Each head of an area is endowed with certain privileges and powers which facilitates them in their tasks. It’s safe to say that the number of individuals working for the United States government goes into the millions. This is just to keep one country working. Multiply that by all the countries on the planet, which is around 200, and all the people working for these governments, the total would easily come out to tens of millions of people employed by the various governments of the world to run one planet.

The way it’s explained is that in order to keep the universe running, Krishna, the supreme being, has put into place individuals that oversee different parts of the material universe. These individuals are powerful beings that have been appointed by Krishna and have been bestowed with the necessary powers and abilities to manage and govern their area of creation. They can be referred to as demigods. For example, there is someone responsible for the sun and his name is Surya. The goddess Saraswati is the overseer of knowledge. The creator of the material universe is known as Brahma. The destruction of the universe is overseen by Shiva and Vishnu serves as the maintainer. There are individuals overseeing the oceans, the wind, and practically every facet of creation. When seen from this perspective, 33 million is not that big a number.

The 33 Million Gods of Hinduism (Huffington Post)

Because the pantheon of gods, unlike the ancestral deities or protector spirits, was not associated with a specific family, worship was open to all. This allowed larger associations to form.

de Coulanges goes on to describe how each city had its own patron god or goddess who watched over and protected the city. In this way, Greek cities were quite similar to Mesopotamian ones, which were also based around the worship of a particular tutelary deity (Marduk with Babylon, Ashur with Assur, Enlil with Nippur, Ishtar with Arbela, etc.). The relationship of the citizens to the polis was the same as that of the corporate family, writ large. The sacred worship of the ancestors was transferred to the city’s patron god or goddess. The demos was a kind of congregation, united in worship. It is only in this context, de Coulanges argued, that the institution of the ancient city can be fundamentally understood.

As Michael Hudson has argued, cities themselves were established from earlier sacred sites which date back to prehistoric meeting places of sacred congregation and feasting. For example, it has recently been discovered that Stonehenge was a site of ritual feasting for inhabitants from the distant corners of the British Isles. As Hudson writes, “The earliest urban sites were sanctified, commercial, peaceful, and often multiethnic.”

The multiethnic character of southern Mesopotamian cities (and others as well) led them to formalize rituals of social integration to create a synthetic affinity. Urban cults were structured to resemble the family ‑‑ a public family or corporate body with its own foundation story such as that of Abraham of Ur for the Jews, or heroic myths for Greek cities. Over these families stood the temples, “households of the gods,” whose patron deities were manifestations of a common prototype and given local genealogies.

Assyriologists have noted that early Mesopotamian rulers downplayed their family identity by representing their lineage as deriving from the city‑temple deities. Sargon of Akkad, often taken as a prototype for the myth of the birth of royal heroes (including Moses and Romulus) emphasized his “public family.” In any event archaic clan groupings seem to have been relatively open to newcomers. There is little Bronze Age evidence for closed aristocracies of the sort found in classical antiquity. Mesopotamia seems to have remained open and ethnically mixed for thousands of years, and the Sumerians probably incorporated strangers as freely as did medieval Irish feins and many modern tribal communities…

Even as cities became more secular in classical times, their administrative focus remained shaped to a large extent by sacred rituals. Town planners were augurs, more concerned with reading omens than with the more pragmatic aspects of city planning. In an epoch when medicine was ritualistic and doctors often were in the character of shamans, the idea of promoting health was to perform proper rituals at the city’s foundation rather than to place cities on slopes for good drainage. (This is why it was considered auspicious to build Rome around the mosquito‑ridden Forum.) Material considerations were incorporated to the extent that they could be reconciled with the guiding social cosmology.

Many millennia were required before a common body of law came to govern the city and the land, temples and palaces in a single code. Polis-type cities and their law codes combining hitherto separate public and private, sacred and secular functions were relatively late. And when such cities arose, in classical times, they had become much more genetically closed than was the case in archaic towns.

However, the citizens of the polis were still simultaneously members of multiple, overlapping sodalities—clans, tribes, phratries, neighborhoods, genē, and so on. Yet each association was based around religion. Some associations were by birth and others were by choice. At different points in their lives, people became members of these multiple overlapping social associations and cults:

From the tribe men passed to the city; but the tribe was not dissolved on that account, and each of them continued to form a body, very much as if the city had not existed. In religion there subsisted a multitude of subordinate worships, above which was established one common to all; in politics, numerous little governments continued to act, while above them a common government was founded…

Thus the city was not an assemblage of individuals; it was a confederation of several groups, which were established before it, and which it permitted to remain. We see, in the Athenian orators, that every Athenian formed a portion of four distinct societies at the same time; he was a member of a family, of a phratry, of a tribe, and of a city. He did not enter at the same time and the same day into all these four, like a Frenchman, who at the moment of his birth belongs at once to a family, a commune, a department, and a country. The phratry and the tribe are not administrative divisions. A man enters at different times into these four societies, and ascends, so to speak, from one to the other. First, the child is admitted into the family by the religious ceremony, which takes place six days after his birth. Some years later he enters the phratry by a new ceremony, which we have already described. Finally, at the age of sixteen or eighteen, he is presented for admission into the city.

On that day, in the presence of an altar, and before the smoking flesh of a victim, he pronounces an oath, by which he binds himself, among other things, always to respect the religion of the city. From that day he is initiated into the public worship, and becomes a citizen. If we observe this young Athenian rising, step by step, from worship to worship, we have a symbol of the degrees through which human association has passed. The course which this young man is constrained to follow is that which society first followed. Ancient City: pp. 104-106

It was this worship, mediated by priests and based in temples, that allowed for greater levels of social complexity than tribal groupings. Everywhere an organized, professional, bureaucratic priesthood emerged, we see a scaling up of social complexity and the emergence of permanent status hierarchies. Certain families are ranked higher than others, either through an ability to mediate with transcendent deities or through descent from a particularly prestigious ancestor. Often the head of this lineage becomes the first de facto ruler. And there is always a connection between the priesthood and the political ruling class. Sometimes they are one and the same, as in a theocracy. Other times they are ideological allies, with the secular authority in the driver’s seat (an arrangement called Caesaropapism). Since the priests are mediators between men and the gods, their services are essential—not to mention expensive. We’ve previously shown that the donations to the priestly class (as described in Leviticus, for example) were the origin of taxation. And the need to assess these donations against one another was the impetus for the development of money, which originated as a system of measurement. Thus, primitive general-purpose money was always and everywhere associated with priests, kings, and temples.

The economist John Henry wrote an account of Egyptian religion and of how it changed over time as a priestly cult centered around astrology and the gods formed, necessitating the development of money:

Tribal societies practised magic in which the community exercised a collective relationship with their deceased ancestors who were believed to inhabit a spirit world that was part of nature. The deceased were to continue to fulfill their social obligations by communicating tribal commands to those forces of nature which could not be understood by pre-scientific populations.

Totemism differs from mature religion in that no prayers are used, only commands. The worshipers impose their will on the totem by the compelling force of magic, and this principle of collective compulsion corresponds to a state of society in which the community is supreme over each and all of its members … the more advanced forms of worship, characteristic of what we call religion, presupposed surplus production, which makes it possible for the few to live off the labour of the many.

The king had been chosen and approved by the gods and after his death he retired into their company. Contact with the gods, achieved through ritual, was his prerogative, although for practical purposes the more mundane elements were delegated to priests. For the people of Egypt, their king was a guarantor of the continued orderly running of their world: the regular change of seasons, the return of the annual inundation of the Nile, and the predictable movements of the heavenly bodies, but also safety from the threatening force of nature as well as enemies outside Egypt’s borders.

Signifying the new state of affairs was the temple which was not only ‘…an architectural expression of royal power, it was for them a model of the cosmos in miniature’. And, while the pharaohs were careful not to supplant the clan (magic) cults with the new centralized religion (until the ill-fated experiment of Akhetaten, that is), the pharaoh became ‘…theoretically, the chief priest of every cult in the land’.

The state religion was structured around Re and Osiris, emphasizing continual renewal in a never-ending cycle of repetition. The ideological thrust was one of permanence and long-standing tradition. This, even as change took place and fundamental political innovations were introduced, ‘…(the) tendency for Egyptian kings (was) not to emphasize what innovations they were instituting, but rather to stress how they were following long traditions…’

Essentially, the spirit world was converted to one of gods, and the control of nature, previously seen as a generally sympathetic force, was now in the hands of priests. Nature itself became hostile and its forces, controlled by gods, required pacification through offerings. The king–the “one true priest”–and the priests placed themselves as the central unifying force around which continued economic success depended. In so doing, they could maintain the flow of resources that provided their enormously high levels of conspicuous consumption and wasteful expenditures that certified their status as envoys to the natural world.

Under the new social organization, tribal obligations were converted into levies (or taxes, if one views this term broadly enough). The economic unit taxed was not the individual but the village… Wray et al., Credit and State Theories of Money, pp. 89-91

It’s also interesting that these ancestral death cults did not contain any kind of moral code, which became so central to later religions, including the ones most people follow today. They also had no creeds or dogmas. Here is Evans-Pritchard again:

To understand … primitive religion in general, …we have to note that he held that early religions lacked creeds and dogmas: ‘they consisted entirely of institutions and practices.’ Rites, it is true, were connected with myths, but myths do not, for us, explain rites; rather the rites explain the myths. If this is so, then we must seek for an understanding of primitive religion in its ritual, and, since the basic rite in ancient religion is that of sacrifice, we must seek for it in the sacrificium; and further, since sacrifice is so general an institution, we must look for its origin in general causes.

Fundamentally, Fustel de Coulanges and Robertson Smith were putting forward what might be called a structural theory of the genesis of religion, that it arises out of the very nature of primitive society. This was also Durkheim’s approach, and he proposed to show in addition the manner in which religion was generated. The position of Durkheim…can only be appraised if two points are kept in mind.

The first is that for him religion is a social, that is an objective, fact. For theories which tried to explain it in terms of individual psychology he expressed contempt. How, he asked, if religion originated in a mere mistake, an illusion, a kind of hallucination, could it have been so universal and so enduring, and how could a vain fantasy have produced law, science, and morals?

And that is exactly the fundamental question I have.

Shifting the Overton Window

“All truth passes through three stages: First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self-evident.”
–Arthur Schopenhauer

When I started this blog way back in 2011 (whoa, has it been that long‽), one of the first things I wrote was a series of lengthy posts about automation: What Are People Good For? I returned to that topic frequently over the years, although not so much lately. For instance, here’s an oldie from 2011: Job Myths & Realities. Here’s another: The New York Times Discovers the Jobless Future.

So, here in 2019, it’s surreal to see everything I said back then going mainstream. At least, that’s what I think when I listen to Democratic presidential candidate Andrew Yang, who has been saying much the same things.

Here’s one from 2016: Automation and the Future of Work: It’s Already Happened. I discussed the effect on the African-American community in Automation and the Future of Work: Black Lives Matter.

Now, I don’t agree with everything he’s saying about solutions. I have certain problems with UBI, and I have different ideas about the best solutions, but that’s a topic for another post. However, it is nice to hear someone talking about these problems rather than offering the usual “let them eat training.” Until now, elites have stubbornly stuck to the idea that deindustrialization worked out great for everyone, and resisted any suggestion that vast swaths of America have been reduced to third-world living standards outside of a handful of elite citadels and gated suburbs. The anger in the Midwest and the Heartland came as a shock to the cloistered neoliberal class when it led to throwing a monkey wrench into the gears of collective governance by knowingly electing an incompetent proto-fascist grifter as president. “How could this happen?” the elites wondered. They must just all be racists.

I’ve been listening to an old interview between Yang and Ezra Klein. Klein is the poster child for the kind of coastal-dwelling, hyperprivileged, credentialed-class elite that lives in a permanent bubble. I get the feeling he’s never even been to flyover country, and would probably be more at home in downtown Kuala Lumpur than he would be in my location in Milwaukee. I’m sure Midwesterners would be as exotic to him as the headhunting highlanders of New Guinea. In the interview with Yang, he rolls out every trope in the book to deny that there’s any sort of problem with jobs or automation, including the hoary old “we all used to be farmers, and now look at us!” trope. Get better journalists.
Last Week Tonight with John Oliver also did a terrible job, rolling out the typical lazy thinking and specious arguments against the impact of automation and deindustrialization. Sometimes YouTube comments are intelligent.

For a much better discussion, here’s a clip from the Sam Harris interview (YouTube)

I also wrote way back then about how meritocracy is a sham: Thoughts on Meritocracy. People may have thought I was harsh or talking out of my ass back then, but with the recent college admissions scandal (“Varsity Blues”), I don’t think people are as sold on the idea of meritocracy anymore. Once again, the emperor has no clothes. I think the reason that this incident worried the powers that be is that it strikes directly at the myths that are used to justify the obscene inequalities we see today.

The other “outside” topic I’ve written about over the years has been what’s often referred to as Modern Monetary Theory, or Functional Finance. Well, that used to be way out there. But now it’s mainstream enough to engender attacks from the press. One came via Paul Krugman at the New York Times, and another from the socialist magazine Jacobin.

Economists who have developed the MMT paradigm, especially Stephanie Kelton, Randall Wray and Pavlina Tcherneva, have responded vigorously. Once again, wherever you fall on this topic, I think we can agree that this debate is finally happening. Tcherneva responded to the attack on MMT by Doug Henwood with a piece of her own for Jacobin: MMT Is Already Helping. Incidentally, much of my writing on economic history has been informed and inspired directly by their publications.

I can’t keep track of all of these, but interfluidity has a good roundup of the MMT Wars: MMT streetfighting

Three levels of controversy over MMT (interfluidity)

Bill Black: MMT Takes Center Stage – and Orthodox Economists Freak (Naked Capitalism)

MMT is Politically Open and Applicable to Both Capitalism and Socialism (Heteconomist)

What’s wrong with MMT? (Medium)

Another fictional characterisation of MMT finishes in total confusion (Billyblog)

I think it’s pretty clear that we’ve tentatively moved into the “violent opposition” phase. And that’s the best news I’ve heard in a while. I don’t often toot my own horn or pat myself on the back (it’s not in my nature), but I hope you’ll permit me a modicum of self-congratulation that the topics this little blog has dealt with over the years are finally being discussed in mainstream media venues.

EDIT: More good news: apparently Chicago elected six Democratic Socialists to its city council (The Guardian). Kind of ironic that they’re pulling ahead of us here in Milwaukee, where we were run by Socialist Party mayors until 1960.

Fun Facts March 2019

Sodium citrate is the secret ingredient in making nacho cheese sauce. Coincidentally, its chemical formula is Na3C6H5O7 (NaCHO)
Cook’s Illustrated Explains: Sodium Citrate (Cook’s Illustrated)

According to the FBI there are 300 times more impostor Navy SEALs than actual SEALs
Don Shipley (Navy SEAL) (Wikipedia)

You were more likely to get a job in the 18th century if you had smallpox scars. The scars proved that you had already had smallpox and could not pass it on to your employers.
(Reddit)

1,500 private jets flew into Davos in 2019
1,500 private jets coming to Davos (BoingBoing)

According to US Customs and Border Protection, border crossings of Mexican and Central American refugees ranged from 20,000 to roughly 60,000 people per month in 2018. In Los Algodones [Mexico] alone, nearly five times as many American dental refugees are going the opposite way. To get an idea of the absurdity, one could argue there are more people currently fleeing the US’s health care system than refugees seeking asylum from extreme violence and state terror in Central America.
Millions of Americans Flood Into Mexico for Health Care — the Human Caravan You Haven’t Heard About (Truthout) similarly:

The U.S. government estimates that close to 1 million people in California alone cross to Mexico annually for health care, including to buy prescription drugs. And between 150,000 and 320,000 Americans list health care as a reason for traveling abroad each year. Cost savings is the most commonly cited reason.
American Travelers Seek Cheaper Prescription Drugs In Mexico And Beyond (NPR). Who’s the Third World country now???

Virginia students learn in trailers while state offers Amazon huge tax breaks (The Guardian)

The term “litterbug” was popularized by Keep America Beautiful, which was created by “beer, beer cans, bottles, soft drinks, candy, cigarettes” manufacturers to shift public debate away from radical legislation to control the amount of waste these companies were (and still are) putting out.
A Beautiful (If Evil) Strategy (Plastic Pollution Coalition)

Americans Got 26.3 Billion Robocalls Last Year, Up 46 Percent From 2017.
https://www.washingtonpost.com/technology/2019/01/29/report-americans-got-billion-robocalls-last-year-up-percent/

Over the past 20 years, more than $7 billion in public money has gone toward financing the construction and renovation of NFL football stadiums.
Why do taxpayers pay billions for football stadiums? (Vox)

San Francisco has more drug addicts than it has students enrolled in its public high schools.
https://marginalrevolution.com/marginalrevolution/2019/02/san-francisco-fact-of-the-day.html

By 2025, deaths from illicit opioid abuse are expected to skyrocket by 147%, up from 2015. Between 2015 and 2025, around 700,000 people are projected to die from an opioid overdose, and 80% of these will be caused by illicit opioids such as heroin and fentanyl. (in other words, everything is going according to plan)
https://www.upi.com/Health_News/2019/02/01/Study-Illicit-opioid-deaths-to-rise-by-147-percent-by-2025/3961549026251/

35% of the decline in fertility between 2007 and 2016 can be explained by declines in births that were likely unintended; this is driven by drops in births to young women.
https://www.nber.org/papers/w25521

In 1853, not many Americans worked in an office. Even as late as the 1880s, fewer than 5 percent of Americans were involved in clerical work.
The Open Office and the Spirit of Capitalism (American Affairs)

About 40% of young adults cannot afford to buy one of the cheapest homes in their area in the UK, with the average deposit now standing at about £26,000
Young people living in vans, tiny homes and containers (BBC)

Terror attacks by Muslims receive an average of 357 percent more media coverage than those by other groups. (Newsweek). Maybe the New Zealand mosque shooting will change that.

One-third of the billions of dollars [GoFundMe] has raised since its inception went toward somebody’s medical expenses.
US Healthcare Disgrace: GoFundMe-Care Symptomatic of Extreme Inequality (Who. What. Why)

40% of police officer families experience domestic violence, in contrast to 10% of families in the general population.
http://womenandpolicing.com/violencefs.asp

After water, concrete is the most widely used substance on Earth. If the cement industry were a country, it would be the third largest carbon dioxide emitter in the world with up to 2.8bn tonnes, surpassed only by China and the US.
Concrete: the most destructive material on Earth (The Guardian)

Rural areas have not even recovered the jobs they lost in the recession….Suicide rates are on the rise across the nation but nowhere more so than in rural counties.
Two-Thirds of Rural Counties Have Fewer Jobs Today Than in 2007 (Daily Yonder)

Mapping the rising tide of suicide across the United States (Washington Post). According to plan…

On any given day, 37 percent of American adults eat fast food. For those between 20 and 39 years old, the number goes up to 45 percent – meaning that on any given day, almost half of younger adults are eating fast food.
4 troubling ways fast food has changed in 30 years (Treehugger)

Global investors dumped $4.2 billion into companies working on self-driving cars (or autonomous vehicles, AVs) in the first 3 quarters of 2018.
In Praise of Dumb Transportation (Treehugger)

In the early Middle Ages, nearly one out of every thousand people in the world lived in Angkor, the sprawling capital of the Khmer Empire in present-day Cambodia.
The city of Angkor died a slow death (Ars Technica)

Neanderthals are depicted as degenerate and slouching because the first Neanderthal skeleton found happened to be arthritic.
20 Things You didn’t Know About Neanderthals (Discover)

There were more than twice as many suicides (44,193) in the US in 2018 as there were homicides (17,793)
College Dreams Dashed (Psychology Today)

Adolescents are more likely to feel depressed and self-harm, and are less likely to get a full night’s sleep, than 10 years ago.
Adolescent health: Teens ‘more depressed and sleeping less’ (BBC)

When his eight years as President of the United States ended on January 20, 1953, private citizen Harry Truman took the train home to Independence, Missouri, mingling with other passengers along the way. He had no secret service protection. His only income was an Army pension. (Reddit)

Khoisan people of South Africa were once the most populous humans on Earth. (Ancient Origins)

[T]he contribution of top firms to US productivity growth has dropped by over 40 percent since 2000. [If] in the 1960s you were to double the productivity of GM, that would clearly have a huge impact on the economy. If you were to double the productivity of Facebook overnight, it wouldn’t even move the needle – you would get slightly better targeted ads, but zero impact on the economy.
The “Biggest Puzzle in Economics”: Why the “Superstar Economy” Lacks Any Actual Superstars (ProMarket)

Almost half of new cancer patients lose their entire life savings. (Insider)

The son of a US Governor is 6,000 times more likely to become a Governor than the average American and the son of a US Senator is 8,500 times more likely to become a senator than the average American. (Reddit)

From 1987 until 2011-12—the most recent academic year for which comparable figures are available—universities and colleges collectively added 517,636 administrators and professional employees…

Part-time faculty and teaching assistants now account for half of instructional staffs at colleges and universities, up from one-third in 1987. During the same period, the number of administrators and professional staff has more than doubled. That’s a rate of increase more than twice as fast as the growth in the number of students.
New Analysis Shows Problematic Boom In Higher Ed Administrators (Huffington Post)

From 2009 to 2017, major depression among 20- to 21-year-olds more than doubled, rising from 7 percent to 15 percent. Depression surged 69 percent among 16- to 17-year-olds. Serious psychological distress, which includes feelings of anxiety and hopelessness, jumped 71 percent among 18- to 25-year-olds from 2008 to 2017. Twice as many 22- to 23-year-olds attempted suicide in 2017 compared with 2008, and 55 percent more had suicidal thoughts. The increases were more pronounced among girls and young women. By 2017, one out of five 12- to 17-year-old girls had experienced major depression in the previous year.
The mental health crisis among America’s youth is real – and staggering (The Conversation)

Infectious diseases that ravaged populations in the Middle Ages are resurging in California and around the country, especially in homeless encampments.
“Medieval” Diseases Flare as Unsanitary Living Conditions Proliferate (Truthout) Who’s the Third World Country? Repeat after me, “according to plan…”

Benjamin Franklin chose never to patent any of his inventions or register any copyright (SmallBusiness.com)

I think it’s time to get the hell out of here:

Rhapsody on Blue

A few years ago, a photograph went “viral” on the internet. It was just a simple picture of a dress. What was so compelling about it?


Well, what was so incredible about this particular photo was that nobody could agree about what color it was. Some people said it was white with gold stripes. Others insisted, just as firmly, that it was blue with black stripes (which is what I saw). As the BBC reported, even Kim and Kanye couldn’t agree, but decided to stay together for the sake of the money and the fame.

Why everyone is asking: 'What colour is this dress?' (BBC)

White & Gold or Blue & Black? Science of the Mystery Dress (Live Science)

Relevant xkcd: https://xkcd.com/1492/

This brings to mind an old adage I heard a long time ago: "You don't see with your eyes. You see with your brain with the help of your eyes."

And that simple, yet profound, distinction makes all the difference. Once you grasp that, a lot of these ideas begin falling into place.

For another example somewhat more pertinent to our discussion of auditory hallucinations, a sound clip went viral in much the same way. When the clip was played, some people heard the name “Laurel”. Others insisted that what the clip really said was “Yanny”. As one researcher said of these illusions, “All of this goes to highlight just how much the brain is an active interpreter of sensory input, and thus that the external world is less objective than we like to believe.”

‘Yanny’ or ‘Laurel’? Why Your Brain Hears One or the Other in This Maddening Illusion (Live Science)

Of course, the ultimate reason for the illusion was exactly the same: You don’t hear with your ears. You hear with your brain with the help of your ears.

Now, you need to keep this in mind with the discussion we’re about to have.

We've talked previously about how metaphor, analogy, language, and culture shape our perceptions of the world around us. It turns out that numerous studies have confirmed that the classification schemes, metaphors, models, and language that we use color our perception of the so-called "objective" world. And 'color' turns out to be an apt word.

For example, many cultures around the world do not make a distinction between the colors blue and green. That is, they don't actually have a word for 'blue'; rather, blue and green are classified as different shades of the same color. In fact, 68 languages use green-or-blue (grue) words, compared to only 30 languages that use distinct words for green and blue. This does not mean that people in these cultures literally cannot 'see' the color blue, as if they perceived it as another color, or as somehow invisible (color perception is created by light wavelengths striking cone cells on the retina). Rather, they simply feel that no special distinction needs to be made between these colors in their language.

It turns out that this actually affects how such cultures perceive the world around them. The Himba (whom we mentioned previously) also do not make this distinction. When given the task of identifying which shades of blue and green were different, they were slower than people from cultures that do make such a distinction. By contrast, they do differentiate multiple shades of green, and were able to identify a different shade of green faster than people from cultures that do not make such a distinction (such as ours).

…there’s actually evidence that, until modern times, humans didn’t actually see the colour blue…the evidence dates all the way back to the 1800s. That’s when scholar William Gladstone – who later went on to be the Prime Minister of Great Britain – noticed that, in the Odyssey, Homer describes the ocean as “wine-dark” and other strange hues, but he never uses the word ‘blue’.

A few years later, a philologist (someone who studies language and words) called Lazarus Geiger decided to follow up on this observation, and analysed ancient Icelandic, Hindu, Chinese, Arabic, and Hebrew texts to see if they used the colour. He found no mention of the word blue.

When you think about it, it’s not that crazy. Other than the sky, there isn’t really much in nature that is inherently a vibrant blue.

In fact, the first society to have a word for the colour blue was the Egyptians, the only culture that could produce blue dyes. From then, it seems that awareness of the colour spread throughout the modern world…Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) [голубой] and dark blue (siniy) [синий], can discriminate between light and dark shades of blue much faster than English speakers.

This all suggests that, until they had a word for it, it's likely that our ancestors didn't actually see blue. Or, more accurately, they probably saw it as we do now, but they never really noticed it…

There’s Evidence Humans Didn’t Actually See Blue Until Modern Times (Science Alert – note the title is misleading)

In fact, the way color is described throughout the Iliad is distinctly odd, a fact that scholars have long noted:

Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze.

It gets stranger. Not only was Homer’s palette limited to only five colors (metallics, black, white, yellow-green, and red), but a prominent philosopher even centuries later, Empedocles, believed that all color was limited to four categories: white/light, dark/black, red, and yellow. Xenophanes, another philosopher, described the rainbow as having but three bands of color: porphyra (dark purple), khloros, and erythros (red).

The Wine-Dark Sea: Color and Perception in the Ancient World (Clarkesworld Magazine)

Perhaps the blind poet was, indeed, tripping. But the ancient Greeks were hardly alone in their unusual description of colors:

The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”).

It turns out that the appearance of color in ancient texts, while also reasonably paralleling the frequency of colors that can be found in nature (blue and purple are very rare, red is quite frequent, and greens and browns are everywhere), tends to happen in the same sequence regardless of civilization: red : ochre : green : violet : yellow—and eventually, at least with the Egyptians and Byzantines, blue.

The Wine-Dark Sea: Color and Perception in the Ancient World (Clarkesworld Magazine)

Of course, biology has a role to play here too. If someone is red/green color blind, which about 1 in 10 men are, they will make no differentiation between red and green. Nor will they be able to adequately describe what they are seeing to those of us who are not color-blind.

I always remember a discussion I had many years ago with a friend of mine who was color-blind (the one who drowned, incidentally). I asked him if he saw red and green as both red or both green. Here’s what he told me: “They’re the same.”

Me: 'The same' as in they're both red, or 'the same' as in they're both green?

Him: Neither. They’re just the same.

Me: So…they’re both gray then? No color at all.

Him: No, it’s not gray. It’s a color.

Me: Okay, which color? Red or green?

Him: Neither.

Me: How can it be neither? It has to be a color. Which color is it, red or green? Or some other color?

Him: I don't know. They're just…the same.

And on and on we went…

The Radiolab podcast did a whole episode on the topic which is worth a listen: Why the sky isn’t blue (Radiolab)

And a video explanation: The Invention Of Blue (YouTube)

The World Atlas of Language Structures Online has an entire entry devoted to terms for Green and Blue that is worth reading. https://wals.info/chapter/134

This post: Blue on Blue goes into this topic in exhaustive detail.

Perception is as much cognition as sensation. Colors don't exist in the world; they are our brain's way of processing light waves detected by the eyes. Someone unable to see from birth will never be able to see normal colors, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that primarily happens in infancy and childhood.

Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, etc. before our culture has the language for it? And, conversely, does the language we use and how we use it indicate our actual experience? Or does it filter and shape it? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing it? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?

This isn't just about color. There is something extremely bizarre going on, according to what we moderns assume to be the case about the human mind and perception.

Blue on Blue (Benjamin David Steele – a lot of material on Jaynes’s ideas here)

Another example is the fact that some cultures don't have words for the kind of relative directions that we have (left, right, etc.). Instead, they only have the cardinal directions—north, south, east, and west. This "exocentric orientation" gives them an almost superhuman sense of direction and orientation compared to people in industrialized cultures:

In order to speak a language like Guugu Yimithirr, you need to know where the cardinal directions are at each and every moment of your waking life. You need to have a compass in your mind that operates all the time, day and night, without lunch breaks or weekends off, since otherwise you would not be able to impart the most basic information or understand what people around you are saying.

Indeed, speakers of geographic languages seem to have an almost-superhuman sense of orientation. Regardless of visibility conditions, regardless of whether they are in thick forest or on an open plain, whether outside or indoors or even in caves, whether stationary or moving, they have a spot-on sense of direction. They don’t look at the sun and pause for a moment of calculation before they say, “There’s an ant just north of your foot.” They simply feel where north, south, west and east are, just as people with perfect pitch feel what each note is without having to calculate intervals.

There is a wealth of stories about what to us may seem like incredible feats of orientation but for speakers of geographic languages are just a matter of course. One report relates how a speaker of Tzeltal from southern Mexico was blindfolded and spun around more than 20 times in a darkened house. Still blindfolded and dizzy, he pointed without hesitation at the geographic directions.

Does Your Language Shape How You Think? (New York Times)
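As a concrete illustration of why this requires a constantly running mental compass, here is a minimal sketch (my own toy example, not from the article): converting between egocentric terms like "left" and cardinal terms like "west" is only possible if you know which way you are facing, and a speaker of a purely geographic language needs that information for nearly every sentence.

```python
# Toy illustration (not from the article): translating egocentric directions
# ("the ant is to your left") into cardinal ones ("the ant is west of you")
# requires knowing the speaker's current heading.

CARDINALS = ["north", "east", "south", "west"]

def egocentric_to_cardinal(relative: str, facing: str) -> str:
    """Map 'ahead'/'right'/'behind'/'left' to a cardinal direction,
    given the direction the speaker is facing."""
    offsets = {"ahead": 0, "right": 1, "behind": 2, "left": 3}
    start = CARDINALS.index(facing)
    return CARDINALS[(start + offsets[relative]) % 4]

# An English speaker can say "it's on your left" without knowing where north is;
# a Guugu Yimithirr speaker cannot produce the equivalent sentence without a heading.
print(egocentric_to_cardinal("left", facing="north"))  # -> west
print(egocentric_to_cardinal("left", facing="south"))  # -> east
```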

The reference to perfect pitch is interesting, since speakers of tonal languages (say, Mandarin Chinese or Vietnamese) are more likely to have perfect pitch than people who do not speak a tonal language (such as English speakers). Another common feature of many languages is that statements, by their very syntactic structure, establish whether the speaker knows something for sure or is making an extrapolation. For example:

…some languages, like Matsés in Peru, oblige their speakers, like the finickiest of lawyers, to specify exactly how they came to know about the facts they are reporting. You cannot simply say, as in English, “An animal passed here.” You have to specify, using a different verbal form, whether this was directly experienced (you saw the animal passing), inferred (you saw footprints), conjectured (animals generally pass there that time of day), hearsay or such. If a statement is reported with the incorrect “evidentiality,” it is considered a lie.

So if, for instance, you ask a Matsés man how many wives he has, unless he can actually see his wives at that very moment, he would have to answer in the past tense and would say something like “There were two last time I checked.” After all, given that the wives are not present, he cannot be absolutely certain that one of them hasn’t died or run off with another man since he last saw them, even if this was only five minutes ago. So he cannot report it as a certain fact in the present tense. Does the need to think constantly about epistemology in such a careful and sophisticated manner inform the speakers’ outlook on life or their sense of truth and causation?

Does Your Language Shape How You Think? (New York Times)

The Pirahã of the Brazilian Amazon have a number of these linguistic anomalies, as reported by Daniel Everett. Most famously, they do not use recursion in their language. They have essentially no numbering system—their only numbers are one, two, and many. Nouns have no plural form. They have no simple categorical words for colors; rather, they describe color in terms of various things in their environment, somewhat reminiscent of Homer's graphic descriptions above:

I next noticed…that the Pirahãs had no simple color words, that is, no terms for color that were not composed of other words. I had originally simply accepted Steve Sheldon’s analysis that there were color terms in Pirahã. Sheldon’s list of colors consisted of the terms for black, white, red (also referring to yellow), and green (also referring to blue).

However, these were not simple words, as it turned out. They were phrases. More accurate translations of the Pirahã words showed them to mean: “blood is dirty” for black; “it sees” or “it is transparent” for white; “it is blood” for red; and “it is temporarily being immature” for green.

I believe that color terms share at least one property with numbers. Numbers are generalizations that group entities into sets that share general arithmetical properties, rather than object-particular, immediate properties. Likewise, as numerous studies by psychologists, linguists, and philosophers have demonstrated, color terms are unlike other adjectives or other words because they involve special generalizations that put artificial boundaries in the spectrum of visible light.

This doesn't mean that the Pirahãs cannot perceive colors or refer to them. They perceive the colors around them like any of us. But they don't codify their color experiences with single words that are inflexibly used to generalize color experiences. They use phrases.

“Don’t Sleep There Are Snakes” by Daniel Everett, p. 119
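Everett's point that color terms "put artificial boundaries in the spectrum of visible light" can be made concrete with a small sketch (my own toy example; the wavelength cutoffs are invented for illustration): the same physical stimulus gets a different label depending on where a language draws its category lines.

```python
# Toy example (mine, not Everett's): color naming as boundaries drawn on the
# visible spectrum. The same wavelength falls into different categories
# depending on the language's (hypothetical, simplified) boundary set.

def name_color(wavelength_nm: float, categories: dict) -> str:
    """Return the first category whose wavelength band contains the input."""
    for name, (lo, hi) in categories.items():
        if lo <= wavelength_nm < hi:
            return name
    return "unnamed"

# Illustrative bands only; real color categories are fuzzier than this.
english = {"violet": (380, 450), "blue": (450, 495), "green": (495, 570),
           "yellow": (570, 590), "red": (620, 750)}
grue_language = {"violet": (380, 450), "grue": (450, 570),  # one blue-or-green term
                 "yellow": (570, 590), "red": (620, 750)}

print(name_color(480, english))        # -> blue
print(name_color(480, grue_language))  # -> grue
```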

They also do not have any relative directions like ‘left’ and ‘right’; only absolute ones, much like Australian groups. In their culture, everything is oriented relative to the river beside which they live:

During the rest of our hunt, I noticed that directions were given either in terms of the river (upriver, downriver, to the river) or the jungle (into the jungle). The Pirahãs knew where the river was (I couldn’t tell-I was thoroughly disoriented). They all seemed to orient themselves to their geography rather than to their bodies, as we do when we use left hand and right hand for directions.

I didn’t understand this. I had never found the words for left hand and right hand. The discovery of the Pirahãs’ use of the river in giving directions did explain, however, why when the Pirahãs visited towns with me, one of their first questions was “Where is the river?” They needed to know how to orient themselves in the world!

Only years later did I read the fascinating research coming from the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, under the direction of Dr. Stephen C. Levinson. In studies from different cultures and languages, Levinson’s team discovered two broad divisions in the ways cultures and languages give local directions. Many cultures are like American and European cultures and orient themselves in relative terms, dependent on body orientation, such as left and right. This is called by some endocentric orientation. Others, like the Pirahas, orient themselves to objects external to their body, what some refer to as exocentric orientation.

“Don’t Sleep There Are Snakes” by Daniel Everett p. 216

Despite what some might characterize as simplicity, the verbs in the language display a remarkable complexity and nuance:

Although Pirahã nouns are simple, Pirahã verbs are much more complicated. Each verb can have as many as sixteen suffixes-that is, up to sixteen suffixes in a row. Not all suffixes are always required, however. Since a suffix can be present or absent, this gives us two possibilities for each of the sixteen suffixes: 2^16, or 65,536, possible forms for any Pirahã verb. The number is not this large in reality because some of the meanings of different suffixes are incompatible and could not both appear simultaneously. But the number is still many times larger than in any European language. English only has in the neighborhood of five forms for any verb: sing, sang, sung, sings, singing. Spanish, Portuguese, and some other Romance languages have forty or fifty forms for each verb.

Perhaps the most interesting suffixes, however (though these are not unique to Pirahã), are what linguists call evidentials, elements that represent the speaker’s evaluation of his or her knowledge of what he or she is saying. There are three of these in Pirahã: hearsay, observation, and deduction…The placement of all the various suffixes on the basic verb is a feature of grammar. There are sixteen of these suffixes. Meaning plays at least a partial role in how they are placed. So, for example, the evidentials are at the very end because they represent a judgment about the entire event being described. DSTAS; pp. 196-197
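Everett's arithmetic is easy to check with a back-of-the-envelope sketch (mine, not his; the incompatible suffix pairs below are invented purely for illustration): sixteen optional slots give an upper bound of 2^16 forms, and excluding incompatible combinations brings the count down.

```python
# Back-of-the-envelope check on the verb-form arithmetic (a sketch, not
# Everett's method): 16 optional suffix slots, each present or absent.

N_SLOTS = 16
print(2 ** N_SLOTS)  # 65536 -- the upper bound quoted above

# Some suffixes are mutually incompatible, so the real count is lower.
# These slot pairs are made up for illustration only.
incompatible_pairs = [(2, 7), (4, 5)]

def allowed(pattern: int) -> bool:
    """pattern is a bitmask: bit i set means suffix slot i is present."""
    return all(not ((pattern >> a) & 1 and (pattern >> b) & 1)
               for a, b in incompatible_pairs)

print(sum(allowed(p) for p in range(2 ** N_SLOTS)))  # fewer than 65,536
```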

This brings to mind a fascinating point that is not widely known: as material cultures become more complex, their languages actually become more simplified!

Comparing languages across differing cultures suggests an inverse relation between the complexity of grammar and the complexity of culture; the simpler the culture in material terms, the more complex the grammar. Mark Turin notes that colonial-era anthropologists set out to show that indigenous peoples were at a lower stage of evolutionary development than the imperial Western peoples, but linguistic evidence showed the languages of supposedly primitive peoples to have surprisingly complex grammar.

He writes: “Linguists were returning from the field with accounts of extremely complex verbal agreement systems, huge numbers of numeral classifiers, scores of different pronouns and nouns, and incredible lexical variation for terms that were simple in English. Such languages appeared to be untranslatable…­(p.17)…Thus the languages of simpler cultures tend to pack grammatical information into single words, whereas those of industrial society tend to use separate words in combination to create grammatical distinctions…(p.52)…In some languages, entire sentences are packed into a single word. Nicholas Evans and Stephen Levinson give the examples of Ęskakhǭna’tàyęthwahs from the Cayuga of North America, which means “I will plant potatoes for them again,” and abanyawoihwarrgahmarneganjginjeng from the Northern Australian language Bininj Gun-wok, and means “I cooked the wrong meat for them again.” (pp. 16-17)

“The Truth About Language” by Michael C. Corballis

Last time we referred to the substantial differences in behavior discovered by Joseph Henrich et al. between Western "WEIRD" cultures and, well, just about everyone else.

As Heine, Norenzayan, and Henrich furthered their search, they began to find research suggesting wide cultural differences almost everywhere they looked: in spatial reasoning, the way we infer the motivations of others, categorization, moral reasoning, the boundaries between the self and others, and other arenas. These differences, they believed, were not genetic.

The distinct ways Americans and Machiguengans played the ultimatum game, for instance, wasn’t because they had differently evolved brains. Rather, Americans, without fully realizing it, were manifesting a psychological tendency shared with people in other industrialized countries that had been refined and handed down through thousands of generations in ever more complex market economies.

When people are constantly doing business with strangers, it helps when they have the desire to go out of their way (with a lawsuit, a call to the Better Business Bureau, or a bad Yelp review) when they feel cheated. Because Machiguengan culture had a different history, their gut feeling about what was fair was distinctly their own. In the small-scale societies with a strong culture of gift-giving, yet another conception of fairness prevailed. There, generous financial offers were turned down because people’s minds had been shaped by a cultural norm that taught them that the acceptance of generous gifts brought burdensome obligations. Our economies hadn’t been shaped by our sense of fairness; it was the other way around.

The growing body of cross-cultural research that the three researchers were compiling suggested that the mind’s capacity to mold itself to cultural and environmental settings was far greater than had been assumed. The most interesting thing about cultures may not be in the observable things they do—the rituals, eating preferences, codes of behavior, and the like—but in the way they mold our most fundamental conscious and unconscious thinking and perception.

We Aren’t the World (Pacific Standard)

It brings to mind another old adage: "What we call human nature is really human habit." That may not be true for everything, but it looks like it may be true for at least some things.


Jaynes makes a great deal of the fact that the Greek language of the Iliad lacked any reference to an inner decision-making process (mind), or to any kind of "soul" apart from the body. When it isn't locating the source of actors' motivations in the gods speaking directly to them, it locates it in various parts of the body or internal organs. The terms used in place of any kind of reference to mind or spirit are often body parts—heart, chest, lungs, liver, spleen, guts, and so on. These body-part terms later come to refer to a mind or soul (e.g. nous or psyche), but only much later. Psyche, for example, initially referred to 'breath', and nous (noos) referred to vision. Only much later do these words become associated with concepts of spirit, soul, or self. Brian McVeigh puts it somewhat more precisely: "[L]inguo-conceptual changes [reflect] psychohistorical developments; because supernatural entities functioned in place of our inner selves, vocabularies for psychological terms were strikingly limited in ancient languages." Jaynes writes:

There is in general no consciousness in the Iliad. I am saying 'in general' because I shall mention some exceptions later. And in general, therefore, no words for consciousness or mental acts. The words in the Iliad that in a later age come to mean mental things have different meanings, all of them more concrete. The word psyche, which later means soul or conscious mind, is in most instances life-substances, such as blood or breath: a dying warrior breathes out his psyche onto the ground or breathes it out in his last gasp.

The thumos, which later comes to mean something like emotional soul, is simply motion or agitation. When a man stops moving, the thumos leaves his limbs. But it is also somehow like an organ itself, for when Glaucus prays to Apollo to alleviate his pain and to give strength to help his friend Sarpedon, Apollo hears his prayer and "casts strength in his thumos". The thumos can tell a man to eat, drink, or fight. Diomedes says in one place that Achilles will fight "when the thumos in his chest tells him to and a god rouses him." But it is not really an organ and not always localized; a raging ocean has thumos.

A word of somewhat similar use is phren, which is always localized anatomically as the midriff, or sensations in the midriff, and is usually used in the plural. It is the phrenes of Hector that recognize that his brother is not near him; this means what we mean by “catching one’s breath in surprise”. It is only centuries later that it comes to mean mind or ‘heart’ in its figurative sense.

Perhaps most important is the word noos which, spelled as nous in later Greek, comes to mean conscious mind. It comes from the word noeein, to see. Its proper translation in the Iliad would be something like perception or recognition or field of vision. Zeus "holds Odysseus in his noos." He keeps watch over him.

Another important word, which perhaps comes from the doubling of the word meros (part), is mermera, meaning in two parts. This was made into a verb by adding the ending -izo, the common suffix which can turn a noun into a verb, the resulting word being mermerizein, to be put into two parts about something. Modern translators, for the sake of supposed literary quality in their work, often use modern terms and subjective categories which are not true to the original. Mermerizein is thus wrongly translated as to ponder, to think, to be of divided mind, to be troubled about, to try to decide. But essentially it means to be in conflict about two actions, not two thoughts. It is always behavioristic. It is said several times of Zeus, as well as others. The conflict is often said to go on in the thumos, or sometimes in the phrenes, but never in the noos. The eye cannot doubt or be in conflict, as the soon-to-be-invented conscious mind will be able to.

These words are in general, and with certain exception, the closest that anyone, authors or characters or gods, usually get to having conscious minds or thoughts.

There is also no concept of will or word for it, the concept developing curiously late in Greek thought. Thus, Iliadic men have no will of their own and certainly no notion of free will. Indeed, the whole problem of volition, so troubling, I think, to modern psychological theory, may have had its difficulties because the words for such phenomena were invented so late.

A similar absence from Iliadic language is a word for body in our sense. The word soma, which in the fifth century B.C. comes to mean body, is always in the plural in Homer and means dead limbs or a corpse. It is the opposite of psyche. There are several words which are used for various parts of the body, and, in Homer, it is always these parts that are referred to, and never the body as a whole.

Now this is all very peculiar. If there is no subjective consciousness, no mind, soul, or will, in Iliadic men, what then initiates behavior? OoCitBotBM; pp. 69-71

Essentially, what Jaynes is doing is trying to use language to understand the consciousness of these ancient people, similar to what we saw anthropologists and linguists doing for the various remote and isolated cultures currently in existence. Their language may not dictate reality, but the words they use to describe their world offer a clue, perhaps the only clue, as to how they perceived themselves, their world, and their place in it, and how that might differ from our ego-driven point of view. After all, we can't just hop in a time machine and head back to administer psychological tests.

P.S. As an aside to the idea of aural hallucinations, a fascinating study found that non-clinical voice hearers could distinguish “hidden speech” far more effectively than others. This is especially interesting since most studies featuring voice-hearers use the clinical (schizophrenic, epileptic, Parkinson’s, etc.) population, rather than ordinary people. The reasons for this ability are not known:

The study involved people who regularly hear voices, also known as auditory verbal hallucinations, but do not have a mental health problem. Participants listened to a set of disguised speech sounds known as sine-wave speech while they were having an MRI brain scan. Usually these sounds can only be understood once people are either told to listen out for speech, or have been trained to decode the disguised sounds.

Sine-wave speech is often described as sounding a bit like birdsong or alien-like noises. However, after training people can understand the simple sentences hidden underneath (such as “The boy ran down the path” or “The clown had a funny face”).

In the experiment, many of the voice-hearers recognised the hidden speech before being told it was there, and on average they tended to notice it earlier than other participants who had no history of hearing voices. The brains of the voice-hearers automatically responded to sounds that contained hidden speech compared to sounds that were meaningless, in the regions of the brain linked to attention and monitoring skills.

People who ‘hear voices’ can detect hidden speech in unusual sounds (Science Daily)

P.P.S. xkcd did a public survey on color perception and naming a while back:
https://blog.xkcd.com/2010/05/03/color-survey-results/
https://xkcd.com/color/rgb/

The Archaic Mentality

The inspiration for this series of posts was an article in Psychology Today entitled: Did Our Ancestors Think Like Us? I'm pretty confident that they didn't, but in what sense did their minds differ? Were they as different as Jaynes described, or was it something less extreme?

Imagine that you are a time-traveler, able to travel back roughly 40,000 years to the age of the first anatomically modern homo sapiens. Imagine stepping out of your time machine and standing face to face with one of your ancestors: Another human with a brain just as big as yours, and genes virtually identical to your genes. Would you be able to speak to this ancient human? Befriend them? Fall in love with them? Or would your ancestor be unrecognizable, as distinct from you as a wolf is distinct from a pet dog?

…Some think that, since we have the same genes as ancient humans, we should show the same mannerisms. Others suspect that human psychology may have changed dramatically over time. Nobody definitely knows (I certainly don't), but my hunch is that the human mind today works very differently than did our ancestors' minds.

Did Our Ancestors Think Like Us? (Psychology Today)

Brian McVeigh sums up Jaynes’s ideas this way:

In The Origin of Consciousness in the Breakdown of the Bicameral Mind [Jaynes] argued that conscious subjective interiority was not a bioevolutionary phenomenon. Rather, interiority—and by this term he did not mean perceiving, thinking or reasoning but the ability to introspect and engage in self-reflectivity—emerged historically as a cultural construction only about three millennia ago.
The Psychohistory of Metaphors, Brian McVeigh p. 133

I would argue that there is recent psychological research that tentatively backs up some of Jaynes's claims. New research has shown that a lot of what we thought was just "basic human cognition" turns out to be socioculturally constructed. Much of the world today does not think or reason in the same way as members of Western industrial societies do. The blogger writes:

Many animals learn how to solve problems by watching other animals try and fail, but humans appear to take social learning to another level: we learn how to think from one another.

Consider that when people move to a new culture, they actually begin taking on the emotions of that culture, reporting more everyday sadness in cultures that feel more sadness and surprise in cultures where people feel more surprise. Consider that people’s ability to read others’ thoughts and feelings from their behavior depends on the number of words in their native language indicating mental states. Consider that people’s level of prejudice towards other groups (i.e. the extent of their “us versus them” mentality) and moral convictions (i.e. their belief that some acts are fundamentally right or wrong) strongly depends on whether or not they follow an Abrahamic religion. And consider that people’s ability to think “creatively,” to generate new solutions that diverge from old ones, depends on how strictly their culture regulates social norms. This is just a small sampling from hundreds of studies that show how flexible the human mind is.

For a graphic example, it was recently determined that the "primitive" Himba of Namibia are actually more mentally agile than supposedly "high IQ" Westerners at solving novel problems:

“We suggest that through formal education, Westerners are trained to depend on learned strategies. The Himba participate in formal education much less often and this is one possible reason why they exhibited enhanced cognitive flexibility,”

Cognitive neuroscientists observe enhanced mental flexibility in the seminomadic Himba tribe (PsyPost). He continues:

The second reality that makes me think our minds work differently today than they did thousands of years ago is that human culture is staggeringly diverse. We speak over 6,000 languages, follow 4,000 religions, and live our lives according to a sprawling set of social and moral customs. Some other animals have diverse culture: Chimpanzees, for example, forage for food in a number of different ways that are probably socially learned. But human cultural diversity goes beyond one or two kinds of differences; our cultures are different in almost every way imaginable. The development of this cultural diversity may have had a profound impact on our psychologies.

When you put these realities together, you have (a) an amazingly diverse species with (b) an amazing capacity to learn from diversity. Add thousands of years of development and cultural change to the mix and you likely get modern human thinking that scarcely resembles ancient human psychology. This doesn’t mean that today’s humans are “better” than yesterday’s; it just means that humans are fascinating animals, more cognitively malleable than any other.

The writer doesn't get into more detail than that, and there aren't any further explanations so far. But the idea was backed up by a landmark paper which came out a few years ago by Joseph Henrich, along with Steven J. Heine and Ara Norenzayan. They write:

There are now enough sources of experimental evidence, using widely differing methods from diverse disciplines, to indicate that there is substantial psychological and behavioral variation among human populations.

The reasons that account for this variation may be manifold, including behavioral plasticity in response to different environments, divergent trajectories of cultural evolution, and, perhaps less commonly, differential distribution of genes across groups in response to different selection pressures… At the same time, we have also identified many domains in which there are striking similarities across populations. These similarities could indicate reliably developing pan-human adaptations, byproducts of innate adaptations (such as religion), or independent cultural inventions or cultural diffusions of learned responses that have universal utility (such as counting systems, or calendars)…

Not only aren’t Americans typical of how the rest of the world thinks, but Americans are shockingly different (surprising, huh?). As one writer put it, “Social scientists could not possibly have picked a worse population from which to draw broad generalizations. Researchers had been doing the equivalent of studying penguins while believing that they were learning insights applicable to all birds.”

As you might imagine, one of the major differences has to do with radical individualism. Americans see themselves as “rugged individualists,” whereas everyone else sees themselves as part of a larger social fabric:

[S]ome cultures regard the self as independent from others; others see the self as interdependent. The interdependent self — which is more the norm in East Asian countries, including Japan and China — connects itself with others in a social group and favors social harmony over self-expression. The independent self — which is most prominent in America — focuses on individual attributes and preferences and thinks of the self as existing apart from the group.

…Unlike the vast majority of the world, Westerners (and Americans in particular) tend to reason analytically as opposed to holistically. That is, the American mind strives to figure out the world by taking it apart and examining its pieces. Show a Japanese and an American the same cartoon of an aquarium, and the American will remember details mostly about the moving fish while the Japanese observer will likely later be able to describe the seaweed, the bubbles, and other objects in the background. Shown another way, in a different test analytic Americans will do better on…the “rod and frame” task, where one has to judge whether a line is vertical even though the frame around it is skewed. Americans see the line as apart from the frame, just as they see themselves as apart from the group.

Are Americans the Weirdest People on Earth? (Big Think)

As for why Americans, and WEIRD (Western, Educated, Industrialized, Rich, Democratic) countries more generally, are so different from the rest of the world, the authors of the original paper speculate:

To many anthropologically-savvy researchers it is not surprising that Americans, and people from modern industrialized societies more generally, appear unusual vis-á-vis the rest of the species.

For the vast majority of its evolutionary history, humans have lived in small-scale societies without formal schools, government, hospitals, police, complex divisions of labor, markets, militaries, formal laws, or mechanized transportation. Every household provisioned much or all of their own food, made its own clothes, tools, and shelter, and–aside from various kinds of sexual divisions of labor–almost everyone had to master the same skills and domains of knowledge.

Children grew up in mixed age play groups, received little active instruction, and learned largely by observation and imitation. By age 10, children in some foraging societies obtain sufficient calories to feed themselves, and adolescent females take on most of the responsibilities of women.

WEIRD people, from this perspective, grow up in, and adapt to, a highly unusual environment. It should not be surprising that their psychological world is unusual as well. p. 38 (emphasis mine)

I wrote about this study back in 2013: Americans are WEIRD.

The differences between American thinking and the rest of the world seem to mirror the left brain/right brain split described by Iain McGilchrist:

The left hemisphere is dependent on denotative language, abstraction, yields clarity and power to manipulate things that are known and fixed. The right hemisphere yields a world of individual, changing, evolving, interconnected, living beings within the context of the lived world. But the nature of things is never fully graspable or perfectly known. This world exists in a certain relationship. They both cover two versions of the world and we combine them in different ways all the time. We need to rely on certain things to manipulate the world, but for the broad understanding of it, we need to use knowledge that comes from the right hemisphere.

A Psychiatrist Explains the Difference Between Left Brain and Right Brain (Hack Spirit)

Given that thousands of years ago there were NO industrialized countries with a majority of the population educated, wealthy, or literate, it's pretty obvious that thinking must have been quite different. Of course, that does not prove Jaynes's ideas. However, if even modern psychology researchers report substantial differences among existing populations, why is it hard to believe that people separated from us by thousands of years are more different from us than alike?

It’s also worth pointing out that the fundamental structure of our brain changes in response to activities we undertake to navigate our environment. It’s been hypothesized that the use of the internet and ubiquitous computer screens are “rewiring” our brains in some, possibly nefarious, way. An article about this topic in the BBC points out that this is not new–everything we do rewires our brains in some way. In other words, we do not come into the world completely “done” – much of how our brains function is culturally determined. This, in turn, changes the brain’s structure. So we need not posit that somehow the brain architecture of bicameral people was radically different, only that they were using their brains in a different way as determined by the cultural context.

We regularly do things that have a profound effect on our brains – such as reading or competitive sports – with little thought for our brain fitness. When scientists look at people who have spent thousands of hours on an activity they often see changes in the brain. Taxi drivers, famously, have a larger hippocampus, a part of the brain recruited for navigation. Musicians’ brains devote more neural territory to brain regions needed for playing their instruments. So much so, in fact, that if you look at the motor cortex of string players you see bulges on one side (because the fine motor control for playing a violin, for example, is only on one hand), whereas the motor cortex of keyboard players bulges on both sides (because piano playing requires fine control of both hands).

Does the internet rewire our brains? (BBC Future)

In a book I cited earlier, Alone in the World?, the author lists the items that archaeologists look for to indicate behavioral modernity (since culture is ephemeral and does not fossilize):

1. A spoken language;

2. The cognitive capacity to generate mental symbols, as expressed in art and religion;

3. Explicit symbolic behavior, i.e., the ability to represent objects, people, and abstract concepts with arbitrary symbols, vocal or visual, and to reify such symbols in cultural practices like painting, engraving, and sculpture;

4. The capacity for abstract thinking, the ability to act with reference to abstract concepts not limited to time and space;

5. Planning depth, or the ability to formulate strategies based on past experience and to act on them in a group context;

6. Behavioral, economic, and technological innovation; and

7. A bizarre inability to sustain prolonged bouts of boredom.

Often people cite the spectacular cave art of Ice Age Europe as evidence that the people living in such caves must have been behaviorally modern. But consider that some of the most sought-after art in the twentieth century was made by patients suffering from schizophrenia (voice hearing)!

The Julian Jaynes Society has compiled a list of questions about the behavior of ancient peoples that are difficult to explain without recourse to some kind of bicameral theory. I’ve copied and abridged their list below:

1. The Saliency and “Normalcy” of Visions in Ancient Times. Why have hallucinations of gods in the ancient world been noted with such frequency?

2. The Frequency of “Hearing Voices” Today. Why do auditory hallucinations occur more frequently in the general population than was previously known? If hallucinations are simply a symptom of a dysfunctional brain, they should be relatively rare. Instead, they have been found in normal (non-clinical) populations worldwide.

3. Imaginary Companions in Children. Why do between one-quarter and one-third of modern children “hear voices,” called imaginary companions?

4. Command Hallucinations. Why do patients labeled schizophrenic, as well as other voice-hearers, frequently experience “command hallucinations” that direct behavior — as would be predicted by Jaynes’s theory? If hallucinations are simply a symptom of a dysfunctional brain, one would expect they would consist of random voices, not commentary on behavior and behavioral commands.

5. Voices and Visions in Pre-literate Societies. Why are auditory and visual hallucinations, as well as divination practices and visitation dreams, found in pre-literate societies worldwide?

6. The Function of Language Areas in the Non-Dominant Hemisphere. Why is the brain organized in such a way that the language areas of the non-dominant hemisphere are the source of auditory hallucinations — unless this provided some previous functional purpose?

7. The "Religious" Function of the Right Temporal Lobe. Why is the right temporal lobe implicated in auditory hallucinations, intense religious sentiments, and the feeling of a sensed presence?

8. Visitation Dreams. Why do ancient and modern dreams differ so dramatically? Studies of dreams in classical antiquity show that the earliest recorded dreams were all “visitation dreams,” consisting of a visitation by a god or spirit that issues a command — essentially the bicameral waking experience of hearing verbal commands only during sleep. This has also been noted in tribal societies.

9. The Inadequacy of Current Thinking to Account for the Origin of Religion. Why are the worship of gods and dead ancestors found in all cultures worldwide?

10. Accounting for the Ubiquity of Divination. Similarly, why were divination practices also universal?

Jaynes’s theory of a previous bicameral mentality accounts for all of these phenomena, and, in the complete absence of persuasive alternative explanations, appears to be the best explanation for each of them. As one professor once said to me, “There is either Jaynes’s theory, or just ‘weird stuff happens.'”

Questions critics fail to answer (Julian Jaynes Society)

Weird stuff, indeed!!! But there is another, perhaps even more important question not listed above: why did religious concepts change so profoundly during the Axial Age? As Joseph Henrich, the anthropologist whose paper we cited above, put it:

“The typical evolutionary approaches to religion don’t take into account that the kinds of gods we see in religions in the world today are not seen in small-scale societies. I mentioned the ancestor gods; other kinds of spirits can be tricked, duped, bought off, paid; you sacrifice in order to get them to do something; they’re not concerned about moral behavior…Whatever your story is, it’s got to explain how you got these bigger gods.”

Joseph Henrich on Cultural Evolution, WEIRD Societies (Conversations with Tyler)

In researching this series of posts, I'm struck by just how big a gulf there is between (to use Evans-Pritchard's terms) Primitive Religion and Revelatory Religion.

Primitive religion, for all its dramatic variance, appears to be centered around direct revelation from gods, ancestor worship, and communal rituals. It is almost always rooted in some kind of animist belief system, and is always polytheistic.

Revelatory religions, by contrast, tend to emphasize conscious control over one’s own personal behavior (e.g. the ‘Golden Rule’). They emphasize looking for revelation by introspection—going inward—something conspicuously missing from primitive religions. Instead of direct revelation, God’s words are now written down in holy books which are consulted to determine God’s will, permanent and unchanging. Monotheism takes over from polytheism. And a significant portion of the population, unlike in primitive societies, accepts no god at all [atheism = a (without) theos (gods)]. As Brian McVeigh writes, quoting St. Augustine, “By shifting the locus of ‘spiritual activity from external rites and laws into the individual, Christianity brought God’s infinite value into each person.’ In other words, a newly spiritualized space, first staked out by Greek philosophers, was meta-framed and expanded into an inner kingdom where individual and Godhead could encounter each other.” (Psychohistory of Metaphors, pp. 52-53)

For their part, Henrich and other researchers hypothesize that the difference comes from the fact that Universal Religions of Revelation (so-called “Big Gods”) allowed larger and more diverse groups of people to cooperate, thus outcompeting parochial deities who couldn’t “scale up.” Because the “Big Gods” were all-seeing, all-knowing, omnipresent, moralizing deities with the power to reward and punish in the afterlife, they argue, belief in them kept people on the straight-and-narrow, allowing higher-level cooperation between unrelated strangers even without a shared cultural context. Basically, it was a meme that evolved via group selection. As they put it (PDF): “[C]ognitive representations of gods as increasingly knowledgeable and punitive, and who sanction violators of interpersonal social norms, foster and sustain the expansion of cooperation, trust and fairness towards co-religionist strangers.”

I call this “The Nannycam theory of Religion”. As God remarked to Peter Griffin on Family Guy, “I’m kind of like a nannycam. The idea that I *may* exist is enough for some people to behave better.”

By contrast, the breakdown of the bicameral mind provides a different explanation. God now becomes one’s own conscience—the inner voice in one’s head. We now become responsible for our own behavior through the choices we make. The revelatory religions serve as a guide, and as a replacement for the voices that no longer issue their commands. As Brian McVeigh explains:

…interiority is unnecessary for most of human behavior. If this is true, why did we as a species develop it about three thousand years ago (at least according to Julian Jaynes)? What was its purpose?

From the perspective of a sociopolitical organization [sic], interiority alleviates the need for strict hierarchical lines of command and control, which are inherently fragile. By placing a personal tool kit of command and control “inside a person’s head,” interiority becomes society’s inner voice by proxy.

Authorization based on strict hierarchical lines of command and control may be efficient for relatively small, well-circumscribed communities, but if history is any teacher, clear lines of control become less cost-effective in terms of socioeconomic capital the larger and more complex organizations become.

Once authorization for immediate control of self becomes interiorized and individual-centered, an organization actually becomes stronger as its orders, directives, doctrines, admonitions, and warnings become the subjective truths of personal commitment.

Interiority, then, is a sociopolitically pragmatic tool used for control in the same way assigning names to individuals or categorizing people into specialized groups for economic production is. From the individual’s perspective, interiority makes the social environment easier to navigate. Before actually executing a behavior, we can “see” ourselves “in our heads” carrying out an action, thereby allowing us to shortcut actual behavioral sequences that may be time-consuming, difficult, or dangerous.
Brian J. McVeigh; A Psychohistory of Metaphors, pp. 33-34

There are many more “conventional” explanations of the universality of religious beliefs. One popular theory is put forward by anthropologist Pascal Boyer in “Religion Explained.” Basically, he argues that religion is an unintended side effect of what software programmers would refer to as “bugs” in the human cognitive process:

Basing his argument on this evolutionary reasoning, Boyer asserts that religion is in effect a cognitive “false positive,” i.e., a faulty application of our innate mental machinery that unfortunately leads many humans to believe in the existence of supernatural agents like gods that do not really exist.

This also leads Boyer to describe religious concepts as parasitic on ordinary cognitive processes; they are parasitic in the sense that religion uses those mental processes for purposes other than what they were designed by evolution to achieve, and because of this their successful transmission is greatly enhanced by mental capacities that are there anyway, gods or no gods.

Boyer judges the puzzling persistence of religion to be a consequence of natural selection designing brains that allowed our prehistoric ancestors to adapt to a world of predators. A brain molded by evolution to be on the constant lookout for hidden predators is likely to develop the habit of looking for all kinds of hidden agencies. And it is just this kind of brain that will eventually start manufacturing images of the concealed actors we normally refer to as “gods.”

In this sense, then, there is a natural, evolutionary explanation for religion, and we continue to entertain religious ideas simply because of the kinds of brains we have. On this view, the mind it takes to have religion is the mind we have…Religious concepts are natural both in the phenomenological sense that they emerge spontaneously and develop effortlessly, and in the natural sense that also religious imagination belongs to the world of nature and is naturally constrained by genes, central nervous systems, and brains.
J. Wentzel van Huyssteen; Alone In The World? pp. 261-263

Of course, as Jaynes would point out, the gods as depicted in ancient literature are hardly “hidden actors.” They often speak directly to individuals and issue commands which are subsequently obeyed! Massive amounts of time and effort are spent building temples to them. That seems like an awful lot of work to satisfy a simple “false positive” in human cognition.

Other theories focus on what’s called the Theory of Mind. For example: What Religion is Really All About (Psychology Today). As a Reddit commenter put it succinctly:

The basic thesis is that we believe in gods (or supernatural minds in general) because of cognitive adaptations that evolved for social interaction. It was evolutionarily advantageous for monkeys to construct mental models of what other monkeys were feeling/perceiving/thinking, and it’s a natural step from there to believing in disembodied minds, minds that can exist without the monkey. Related YouTube lecture: Why We Believe In Gods.

Testimony to the Sumerian worship of the Cookie Monster

Perhaps. But there are an awful lot of signs in the archaeological record that our ancestors thought very differently than we do, to wit:

1. Eye idols (see above)

2. “Goddess” figurines and idols. Jaynes: “Figurines in huge numbers have been unearthed in most of the Mesopotamian cultures, at Lagash, Uruk, Nippur, and Susa. At Ur, clay figures painted in black and red were found in boxes of burned brick placed under the floor against the walls but with one end opened, facing into the center of the room. The function of all these figurines, however, is as mysterious as anything in all archaeology. The most popular view goes back to the uncritical mania with which ethnology, following Frazer, wished to find fertility cults at the drop of a carved pebble. But if such figurines indicate something about Frazerian fertility, we should not find them where fertility was no problem. But we do.” Origins, p. 166. As the old joke in archaeology goes, if you can’t explain something, just claim it was for ‘fertility.’

3. Human Sacrifice

4. Trepanation

5. God kings:
Jaynes: “I am suggesting that the dead king, thus propped up on his pillow of stones, was in the hallucinations of his people still giving forth his commands…and that, for a time at least, the very place, even the smoke from its holy fire, rising into visibility from furlongs around, was, like the gray mists of the Aegean for Achilles, a source of hallucinations and of the commands that controlled the Mesolithic world of Eynan.

This was a paradigm of what was to happen in the next eight millennia. The king dead is a living god. The king’s tomb is the god’s house…[which]…continues through the millennia as a feature of many civilizations, particularly in Egypt. But, more often, the king’s-tomb part of the designation withers away. This occurs as soon as a successor to a king continues to hear the hallucinated voice of his predecessor during his reign, and designates himself as the dead king’s priest or servant, a pattern that is followed throughout ancient Mesopotamia. In place of the tomb is similarly a temple. And in place of the corpse is a statue, enjoying even more service and reverence, since it does not decompose.” Origins, pp. 142-43

6. Grave goods

7. Cannibalism

8. Veneration of ancestors

9. Mummification of animals

Not to mention things like this:

A common practice among these city dwellers [of Çatalhöyük] was burying their dead under their floors, usually under raised platforms that served as beds. Often they would dig up the skulls of the dead later, plaster their faces (perhaps to recreate the faces of loved ones), and give them to other houses. Archaeologists frequently find skeletons from several people intermingled in these graves, with skulls from other people added. Wear and tear on some plastered skulls suggest they were traded back and forth, sometimes for generations, before being reburied. According to Hodder, such special skulls are just as often female as they are male.

Incredible discovery of intact female figurine from neolithic era in Turkey (Ars Technica)

The Voices in Your Head

What If God Was One Of Us?

What We Talk About When We Talk About Consciousness

The Cathedral of the Mind

One of Oliver Sacks’ last popular books, published in 2012, was about hallucinations, titled, appropriately, Hallucinations. In it, he takes a look at numerous types of hallucinatory phenomena—hallucinations among the blind (Charles Bonnet Syndrome); sensory deprivation; delirium; grieving; Post-traumatic Stress Disorder; epilepsy; migraines; hypnagogia; Parkinson’s Disease; psychedelic usage; religious ecstasy; and so on.

Sacks presents a number of interesting facts about auditory hallucinations. One is that, although hearing voices is indeed indicative of schizophrenia, in most cases auditory hallucinations are experienced by perfectly normal people with no other signs of mental illness.

Sacks begins his chapter on auditory hallucinations by describing a 1973 experiment in which eight “fake” patients went to mental hospitals complaining of hearing voices, but displaying no other signs of mental illness or distress. In each case, they were diagnosed as schizophrenic (one was considered manic-depressive), committed to a facility for weeks (nearly two months in one case), and given anti-psychotic medication (which they obviously did not take). While committed, they even openly took notes on their experiences, yet none of the doctors or staff ever wised up to the ruse. The other patients, however, were much more perceptive. They could clearly see that the fake patients were not at all mentally ill, and even asked them, “What are you doing here?” Sacks concludes:

This experiment, designed by David Rosenhan, a Stanford psychologist (and himself a pseudopatient), emphasized, among other things, that the single symptom of “hearing voices” could suffice for an immediate, categorical diagnosis of schizophrenia even in the absence of any other symptoms or abnormalities of behavior. Psychiatry, and society in general, had been subverted by the almost axiomatic belief that “hearing voices” spelled madness and never occurred except in the context of severe mental disturbance. p. 54

While people often mischaracterize Jaynes’ theory as “everyone in the past was schizophrenic,” it turns out that even today most voices are heard by perfectly normal, otherwise rational, sane, high-functioning people. This has been recognized for over a century in medical literature:

“Hallucinations in the sane” were well recognized in the nineteenth century, and with the rise of neurology, people sought to understand more clearly what caused them. In England in the 1880s, the Society for Psychical Research was founded to collect and investigate reports of apparitions or hallucinations, especially those of the bereaved, and many eminent scientists—physicians as well as physiologists and psychologists—joined the society (William James was active in the American branch)…These early researchers found that hallucinations were not uncommon among the general population…Their 1894 “International Census of Waking Hallucinations in the Sane” examined the occurrence and nature of hallucinations experienced by normal people in normal circumstances (they took care to exclude anyone with obvious medical or psychiatric problems). Seventeen thousand people were sent a single question:

“Have you ever, when believing yourself to be completely awake, had a vivid impression of seeing or being touched by a living being or inanimate object, or of hearing a voice, which impression, as far as you could discover, was not due to an external physical cause?”

More than 10 percent responded in the affirmative, and of those, more than a third heard voices. As John Watkins noted in his book Hearing Voices, hallucinated voices “having some kind of religious or supernatural content represented a small but significant minority of these reports.” Most of the hallucinations, however, were of a more quotidian character. pp. 56-57

While the voices heard by schizophrenics are often threatening and controlling, the voices heard by most people do not appear to have any effect on normal functioning at all.

The voices that are sometimes heard by people with schizophrenia tend to be accusing, threatening, jeering, or persecuting. By contrast, the voices hallucinated by the “normal” are often quite unremarkable, as Daniel Smith brings out in his book Muses, Madmen, and Prophets: Hearing Voices and the Borders of Sanity. Smith’s own father and grandfather heard such voices, and they had different reactions. His father started hearing voices at the age of thirteen. Smith writes:

“These voices weren’t elaborate, and they weren’t disturbing in content. They issued simple commands. They instructed him, for instance, to move a glass from one side of the table to another or to use a particular subway turnstile. Yet in listening to them and obeying them his interior life became, by all accounts, unendurable.”

Smith’s grandfather, by contrast, was nonchalant, even playful, in regard to his hallucinatory voices. He described how he tried to use them in betting at the racetrack. (“It didn’t work, my mind was clouded with voices telling me that this horse could win or maybe this one is ready to win.”) It was much more successful when he played cards with his friends. Neither the grandfather nor the father had strong supernatural inclinations; nor did they have any significant mental illness. They just heard unremarkable voices concerned with everyday things–as do millions of others. pp. 58-59

To me, this sounds an awful lot like Jaynes’s descriptions of the reality of bicameral man, doesn’t it? The voices command, and the people obey the commands. Yet they are still outwardly normal, functioning individuals. You might never know that someone is obeying voices in their head unless they explicitly told you:

This is what Jaynes calls “bicameral mind”: one part of the brain (the “god” part) evaluates the situation and issues commands to the other part (the “man” part) in the form of auditory and, occasionally, visual hallucinations (Jaynes hypothesises that the god part must have been located in the right hemisphere, and the man part, in the left hemisphere of the brain). The specific shapes and “identities” of these hallucinations depend on the culture, on what Jaynes calls the “collective cognitive imperative”: we see what we are taught to see, what our learned worldview tells us must be there.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

In most of the cases described in Hallucinations, people didn’t attribute their auditory or visual hallucinations to any kind of supernatural entity or numinous experience. A few did refer to them as “guardian angels”. But what if they had grown up in a culture where this sort of thing was considered normal, if not commonplace, as was the case for most of ancient history?

Hearing voices occurs in every culture and has often been accorded great importance–the gods of Greek myth often spoke to mortals, and the gods of the great monotheistic traditions, too. Voices have been significant in this regard, perhaps more so than visions, for voices, language, can convey an explicit message or command as images alone cannot.

Until the eighteenth century, voices—like visions—were ascribed to supernatural agencies: gods or demons, angels or djinns. No doubt there was sometimes an overlap between such voices and those of psychosis or hysteria, but for the most part, voices were not regarded as pathological; if they stayed inconspicuous and private, they were simply accepted as part of human nature, part of the way it was with some people. p. 60

In the book The Master and His Emissary, Iain McGilchrist dismisses Jaynes’s theory by claiming that schizophrenia is a disease of recent vintage, one that only emerged sometime around the nineteenth century. Yet, as the Julian Jaynes Society website points out (2.7), this is merely when the diagnosis of schizophrenia was established. Before that time, hearing voices would not have been considered pathological or a disease at all. We looked at how Akhnaten’s radical monotheism was possibly inspired by God “speaking” directly to him, issuing commands to build temples, and so forth. Certainly, he thought it was, at any rate. And he’s hardly alone. We’ve already looked at notable historical personages like Socrates, Muhammad, Joan of Arc, and Margery Kempe, and there are countless other examples. Schizophrenia is no more “new” than PTSD, which was barely recognized until World War One, when it was called “shell shock.”

“My Eyes in the Time of Apparition” by August Natterer. 1913

Another thing Sacks points out is that command hallucinations tend to occur in stressful situations or times of extreme duress, or when one has some sort of momentous or climactic decision to make, just as Jaynes posited. In times of stress, perfectly ordinary, sane people often hear an “outside” voice coming from somewhere guiding their actions. This is, in fact, quite common. In “normal” conditions we use instinct or reflex to guide our actions. But in emergencies, we hear a voice that seems to come from somewhere outside our own consciousness:

If, as Jaynes proposes, we take the earliest texts of our civilisation as psychologically valid evidence, we begin to see a completely different mentality. In novel and stressful situations, when the power of habit doesn’t determine our actions, we rely on conscious thinking to decide what to do, but, for example, the heroes of Iliad used to receive their instructions from gods — which would appear in the times of uncertainty and stress.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

For example, Dr. Sacks is perfectly aware that his “inner monologue” is internally generated. Yet in a stressful situation, the voice became externalized—something that seemed to speak to him from some outside source:

Talking to oneself is basic to human beings, for we are a linguistic species; the great Russian psychologist Lev Vygotsky thought that inner speech was a prerequisite of all voluntary activity. I talk to myself, as many of us do, for much of the day–admonishing myself (“You fool! Where did you leave your glasses?”), encouraging myself (“You can do it!”), complaining (“Why is that car in my lane?”) and, more rarely, congratulating myself (“It’s done!”). Those voices are not externalized; I would never mistake them for the voice of God, or anyone else.

But when I was in danger once, trying to descend a mountain with a badly injured leg, I heard an inner voice that was wholly unlike my normal babble of inner speech. I had a great struggle crossing a stream with a buckled and dislocating knee. The effort left me stunned, motionless for a couple of minutes, and then a delirious languor came over me, and I thought to myself, Why not rest here? A nap maybe? This was immediately countered by a strong, clear, commanding voice, which said, “You can’t rest here—you can’t rest anywhere. You’ve got to go on. Find a pace you can keep up and go on steadily.” This good voice, the Life voice, braced and resolved me. I stopped trembling and did not falter again. pp. 60-61

Sacks gives some other anecdotal examples of people under extreme duress:

Joe Simpson, climbing in the Andes, also had a catastrophic accident, falling off an ice ledge and ending up in a deep crevasse with a broken leg. He struggled to survive, as he recounted in Touching the Void–and a voice was crucial in encouraging and directing him:

“There was silence, and snow, and a clear sky empty of life, and me, sitting there, taking it all in, accepting what I must try to achieve. There were no dark forces acting against me. A voice in my head told me that this was true, cutting through the jumble in my mind with its coldly rational sound.”

“It was as if there were two minds within me arguing the toss. The *voice* was clean and sharp and commanding. It was always right, and I listened to it when it spoke and acted on its decisions. The other mind rambled out a disconnected series of images, and memories and hopes, which I attended to in a daydream state as I set about obeying the orders of the *voice*. I had to get to the glacier….The *voice* told me exactly how to go about it, and I obeyed while my other mind jumped abstractly from one idea to another…The *voice*, and the watch, urged me into motion whenever the heat from the glacier halted me in a drowsy exhausted daze. It was three o’clock—only three and a half hours of daylight left. I kept moving but soon realized that I was making ponderously slow headway. It didn’t seem to concern me that I was moving like a snail. So long as I obeyed the *voice*, then I would be all right.”

Such voices may occur with anyone in situations of extreme threat or danger. Freud heard voices on two such occasions, as he mentioned in his book On Aphasia:

“I remember having twice been in danger of my life, and each time the awareness of the danger occurred to me quite suddenly. On both occasions I felt “this was the end,” and while otherwise my inner language proceeded with only indistinct sound images and slight lip movements, in these situations of danger I heard the words as if somebody was shouting them into my ear, and at the same time I saw them as if they were printed on a piece of paper floating in the air.”

The fact that the gods tend to come to mortals in the Iliad during times of stress has been noted by Judith Weissman, author of “Of Two Minds: Poets Who Hear Voices”:

Judith Weissman, a professor of English at Syracuse University, notes that in the Iliad the gods speak directly to the characters over 30 times, often when the characters are under stress. Many of the communications are short, brief exhortations. The most common godly command, issued when the men are fearful in battle, is to “fight as your father did.” At one point in the Iliad, the god Apollo picks up Hektor, who has fallen in battle, and says, “So come now, and urge on your cavalry in their numbers / to drive on their horses against the hollow ships” (15.258-59)…

Personality Before the Axial Age (Psychology Today)

Hallucinations are also quite common in soldiers suffering from PTSD. If modern soldiers experience PTSD, how much more traumatic must ancient battles have been, like those described so vividly in the Iliad? I can’t even imagine standing face-to-face with a foe, close enough to feel his hot breath, and having to shove a long, sharp metal object directly into his flesh without hesitation; blood gushing everywhere and viscera sliding out of his belly onto the dirt. And yet this was the reality of ancient warfare in the Bronze and Iron ages. Not to mention the various plagues, dislocations, natural disasters, invasions, and other assorted traumatic events.

People with PTSD are also prone to recurrent dreams or nightmares, often incorporating literal or somewhat disguised repetitions of the traumatic experiences. Paul Chodoff, a psychiatrist writing in 1963 about the effects of trauma in concentration camp survivors, saw such dreams as a hallmark of the syndrome and noted that in a surprising number of cases, they were still occurring a decade and a half after the war. The same is true of flashbacks. p. 239

Veterans with PTSD may hallucinate the voices of dying comrades, enemy soldiers, or civilians. Holmes and Tinnin, in one study, found that the hearing of intrusive voices, explicitly or implicitly accusing, affected more than 65 percent of veterans with combat PTSD. p. 237 note 4

The other very common occurrence where otherwise “sane” people will often hallucinate sounds or images is during grief and bereavement. Sometimes this is just hearing the voice of the departed person speaking to them or calling them. Sometimes they may actually see the person. And sometimes they may even carry on extended conversations with their deceased family members!

Bereavement hallucinations, deeply tied to emotional needs and feelings, tend to be unforgettable, as Elinor S., a sculptor and printmaker, wrote to me:

“When I was fourteen years old, my parents, brother and I were spending the summer at my grandparents’ house as we had done for many previous years. My grandfather had died the winter before.”

“We were in the kitchen, my grandmother was at the sink, my mother was helping and I was still finishing dinner at the kitchen table, facing the back porch door. My grandfather walked in and I was so happy to see him that I got up to meet him. I said ‘Grampa,’ and as I moved towards him, he suddenly wasn’t there. My grandmother was visibly upset, and I thought she might have been angry with me because of her expression. I said to my mother that I had really seen him clearly, and she said that I had seen him because I wanted to. I hadn’t been consciously thinking of him and still do not understand how I could have seen him so clearly. I am now seventy-six years of age and still remember the incident and have never experienced anything similar.”

Elizabeth J. wrote to me about a grief hallucination experienced by her young son:

“My husband died thirty years ago after a long illness. My son was nine years old at the time; he and his dad ran together on a regular basis. A few months after my husband’s death, my son came to me and said that he sometimes saw his father running past our home in his yellow running shorts (his usual running attire). At the time, we were in family grief counselling, and when I described my son’s experience, the counsellor did attribute the hallucinations to a neurologic response to the grief. This was comforting to us, and I still have the yellow running shorts.” pp. 233-234

It turns out that this kind of thing is extremely common:

A general practitioner in Wales, W.D. Rees, interviewed nearly three hundred recently bereft people and found that almost half of them had illusions or full-fledged hallucinations of a dead spouse. These could be visual, auditory, or both—some of the people interviewed enjoyed conversations with their hallucinated spouses. The likelihood of such hallucinations increased with the length of the marriage, and they might persist for months or even years. Rees considered these hallucinations to be normal and even helpful in the mourning process. p. 234

A group of Italian psychological researchers published a paper in 2014 entitled “Post-bereavement hallucinatory experiences: A critical overview of population and clinical studies.” According to their paper, after an extensive review of the peer-reviewed literature, they found that anywhere from 30 to 60 percent of grieving people experienced what they called “Post-bereavement hallucinatory experiences” (PBHEs). Is it any wonder that veneration of the dead was so common across cultures, from the Old World to Africa to Asia to the Americas to Polynesia? It was almost universally assumed across ancient cultures that the dead still existed in some way. Some scholars, such as Herbert Spencer, posited that ancestor worship was the origin of all religious rites and practices.

What is the fundamental cause of all these aural hallucinations? As neurologist Sacks freely admits, the source of these phenomena is at present unknown and understudied. Sacks references Jaynes’s “Origin of Consciousness…” in his speculation on possible explanations:

Auditory hallucinations may be associated with abnormal activation of the primary auditory cortex; this is a subject which needs much more investigation not only in those with psychosis but in the population at large–the vast majority of studies so far have examined only auditory hallucinations in psychiatric patients.

Some researchers have proposed that auditory hallucinations result from a failure to recognize internally generated speech as one’s own (or perhaps it stems from a cross-activation with the auditory areas so that what most of us experience as our own thoughts becomes “voiced”).

Perhaps there is some sort of psychological barrier or inhibition that normally prevents most of us from “hearing” such inner voices as external. Perhaps that barrier is somehow breached or underdeveloped in those who do hear constant voices. Perhaps, however, one should invert the question–and ask why most of us do not hear voices.

In his influential 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes speculated that, not so long ago, all humans heard voices–generated internally from the right hemisphere of the brain, but perceived (by the left hemisphere) as if external, and taken as direct communications from the gods. Sometime around 1000 B.C., Jaynes proposed, with the rise of modern consciousness, the voices became internalized and recognized as our own…Jaynes thought that there might be a reversion to “bicamerality” in schizophrenia and some other conditions. Some psychiatrists (such as Nasrallah, 1985) favor this idea or, at the least, the idea that the hallucinatory voices in schizophrenia emanate from the right side of the brain but are not recognized as one’s own, and are thus perceived as alien…It is clear that “hearing voices” and “auditory hallucinations” are terms that cover a variety of different phenomena. pp. 63-64

Recently, neuroscientists have hypothesized the existence of something called an “efference copy,” a copy that the brain makes of certain of its own outgoing signals, such as motor commands. The presence of the efference copy informs the brain that an action originated from itself, and that the resulting sensory inputs are self-generated. For example, the efference copy of your hand movements is what prevents you from tickling yourself. The absence of this efference copy has been postulated as the reason why schizophrenics cannot recognize the voices in their heads as their own. A temporary suppression of the efference copy may likewise be why so many otherwise “sane” people sometimes hear voices as something coming from outside their own minds.

Efference copy is a neurological phenomenon first proposed in the early 19th century in which efferent signals from the motor cortex are copied as they exit the brain and are rerouted to other areas in the sensory cortices. While originally proposed to explain the perception of stability in visual information despite constant eye movement, efference copy is now seen as essential in explaining a variety of experiences, from differentiating between exafferent and reafferent stimuli (stimulation from the environment or resulting from one’s own movements respectively) to attenuating or filtering sensation resulting from willed movement to cognitive deficits in schizophrenic patients to one’s inability to tickle one’s self.

Efference Copy – Did I Do That? Cody Buntain, University of Maryland (PDF)
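To make the comparator idea concrete, here is a minimal toy sketch in Python. This is my own illustration, not anything taken from Sacks or from Buntain’s paper: the function names, numbers, and threshold are invented for the example, and the “forward model” is reduced to an identity function. The point it sketches is simply that the brain predicts the sensory consequence of its own outgoing command, compares the prediction with what actually arrives, and tags close matches as self-generated; if the copy is missing or suppressed, even one’s own inner speech fails the comparison and is experienced as external.

def forward_model(motor_command: float) -> float:
    """Predict the sensory consequence of an outgoing command (toy: identity)."""
    return motor_command

def perceive(motor_command: float, actual_input: float,
             efference_copy_intact: bool = True,
             threshold: float = 0.1) -> str:
    """Tag an incoming signal as self-generated or external."""
    if not efference_copy_intact:
        # No prediction to compare against: even self-produced "inner speech"
        # arrives untagged and is experienced as coming from outside.
        return "external"
    mismatch = abs(actual_input - forward_model(motor_command))
    return "self-generated (attenuated)" if mismatch < threshold else "external"

# Inner speech with the copy intact is recognized as one's own...
print(perceive(motor_command=1.0, actual_input=1.02))
# ...a genuinely unexpected input still registers as external...
print(perceive(motor_command=1.0, actual_input=3.0))
# ...and with the copy suppressed, the same inner speech is "heard" as a voice.
print(perceive(motor_command=1.0, actual_input=1.02, efference_copy_intact=False))

On this toy picture, the difference between my inner monologue and a “voice” lies not in the signal itself but in whether the comparator tags it as mine, which is all the Jaynesian reading of the phenomenon needs.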

I talk to myself all the time. The words I’m typing in this blog post are coming from some kind of “inner self.” But I feel like that inner voice and “me” are exactly the same, as I’m guessing you do too, and so do most of us “normal” people. But is that something inherent in the brain’s bioarchitecture, or is that something we are taught through decades of schooling and absorbing our cultural context? Might writing and education play an important role in the “breakdown” of bicameralism? We’ll take a look at that next time.

The version of bicameralism that seems most plausible to me is the one where the change is a memetic rather than evolutionary-genetic event. If it were genetic, there would still be too many populations that don’t have it, but whose members when plucked out of the wilderness and sent to university seem to think and feel and perceive the world the way the rest of us do.

But integrated, introspective consciousness could be somewhere between language and arithmetic on the list of things that the h. sapien brain has always been capable of but won’t actually do if you just have someone raised by wolves or whatnot. Language, people figure out as soon as they start living in tribes. Arithmetic comes rather later than that. If Jaynes is right, unicameral consciousness is something people figure out when they have to navigate an environment as complex as a bronze-age city, and once they have the knack they teach their kids without even thinking about it. Or other peoples’ kids, if they are e.g. missionaries.

At which point brains whose wiring is better suited to the new paradigm will have an evolutionary advantage, and there will be a genetic shift, but as a slow lagging indicator rather than a cause.

John Schilling – Comment (Slate Star Codex)

[I’m just going to drop this here—much of the cause of depression stems from negative self-talk (“It’s hopeless!” “Things will never get better.” “I’m worthless.” etc.). In such cases, this “inner voice,” rather than being encouraging, is a merciless hector to the depressed individual. As psychologists often point out to their patients, we would never talk to anyone else as callously as we talk to ourselves. Why is that? And it seems interesting that, as far as I know, there are no references to depression in bicameral civilizations. Ancient literature is remarkably free of “despair suicides” (as opposed to suicides for other reasons, such as defeat in battle or humiliation).]