The Recursive Mind (Review) – 5

Part 1
Part 2
Part 3
Part 4

So far in our review of The Recursive Mind, we've seen that recursive thinking lies behind such uniquely human traits as grammatical spoken language, mental time travel, Theory of Mind, higher-order religion, and complex kinship groups.


In the final section of the book, Michael C. Corballis ponders when, how, and why we may have acquired these recursive characteristics.

Whether or not recursion holds the key to the human mind, the question remains how we came to be the way we are–at once so dominant over the other apes in terms of behavior and yet so similar in genetic terms…In modern-day science, it is difficult to avoid the conclusion that the human mind evolved through natural selection, although as we have seen, some recent authors, including Chomsky, still appeal to events that smack of the miraculous—a mutation, perhaps, that suddenly created the capacity for grammatical language…But of course we do have to deal with the seemingly vast psychological distance between ourselves and our closest relatives, the chimpanzees and bonobos. p. 167

What follows is a quick tour through the major milestones in becoming human:

1. Walking/Running

The first and most important change was our standing upright and our unique walking gait. While bipedalism may seem in retrospect to offer many obvious advantages, it turns out that it may not have been all that advantageous at the time:

Bipedalism became obligate rather than facilitative from around two million years ago, and as we shall see, it was from this point that the march to humanity probably began. That is, our forebears finally gained the capacity to walk freely, and perhaps run, in open terrain, losing much of the adaptation to climb trees and move about in the forest canopy.

Even so, it remains unclear just why bipedalism was retained. As a means of locomotion on open terrain, it offers no obvious advantages. Even the knuckle-walking chimpanzee can reach speeds of up to 48 km per hour, whereas a top athlete can run at about 30 km per hour. Other quadrupedal animals, such as horses, dogs, hyenas, or lions, can easily outstrip us, if not leave us for dead. One might even wonder, perhaps, why we didn’t hop rather than stride, emulating the bipedal kangaroo, which can also outstrip us humans…The impression is that the two-legged model, like the Ford Edsel, was launched before the market was ready for it. Or before it was ready for the market, perhaps. pp. 185-186

Of course, bipedalism left the hands free for tool use, but tools entered the human repertoire much later, so we couldn't have evolved walking specifically for that. Persistence hunting also seems to have been a later adaptation, so it was likely not a primary cause either. Another possibility is carrying things, perhaps food or infants. Yet another is language, which, as Corballis argued earlier, may have originated with hand gestures long before verbal communication. If that's true, then the capacity for language (as opposed to speech) goes back very far in the human repertoire.

One interesting thing I didn’t know is that even though chimpanzees are by far our closest genetic relatives among the great apes, anatomically we are closer to orangutans. This means that our transition to bipedalism may have developed not from knuckle-walking, as commonly presumed, but from hand-assisted bipedalism, where we walked upright along horizontal branches in the forest canopy, supported by our arms. The knuckle-walking gait of our chimp/bonobo cousins may not derive from our common ancestor, but may have developed after the split from the human lineage as an alternative method of crossing open territory.

The most arboreal of the great apes is the orangutan, which is more distantly related to us than either the chimpanzee or gorilla. Nevertheless its body morphology is closer to that of the human than is that of chimpanzee or gorilla.

In the forest canopy of Indonesia and Malaysia, orangutans typically adopt a posture known as hand-assisted bipedalism, supporting themselves upright on horizontal branches by holding on to other branches, usually above their heads. They stand and clamber along the branches with the legs extended, whereas chimpanzees and gorillas stand and move with flexed legs. Chimpanzees and gorillas may have adapted to climbing more vertically angled branches, involving flexed knees and a more crouched posture, leading eventually to knuckle-walking as the forested environment gave way to more open terrain. If this scenario is correct, our bipedal stance may derive from hand-assisted bipedalism, going back some 20 million years. Knuckle-walking, not bipedalism, was the true innovation. p. 184

In an earlier post we mentioned an updated version of the Aquatic Ape Hypothesis (AAH), today called by some the “Littoral Hypothesis.” The following is taken from another book by Corballis, The Truth About Language, where he goes into more detail:

Another view is that our forebears inhabited coastal areas rather than the savanna, foraging in water for shellfish and waterborne plants, and that this had a profound influence on our bodily characteristics and even our brain size. This aquatic phase may perhaps have preceded a later transition to the savanna. p. 48…Superficially, at least, it seems to explain a number of the characteristics that distinguish humans from other living apes. These include hairlessness, subcutaneous fat, bipedalism, our large brains, and even language…p. 95…The ability to breathe voluntarily…was an adaptation to diving, where you need to hyperventilate before plunging and then holding your breath during the dive. The fine-motor control over lips, tongue, velum, and throat necessary for producing consonants evolved for the swallowing of soft, slippery foods such as mollusks without biting or chewing. Think of oysters and white wine. p. 162

Philip Tobias once suggested that the term aquatic ape should be dropped, as it had acquired some notoriety over some of its more extravagant claims. [Mark] Verhaegen suggests that the aquatic theory should really be renamed “the littoral theory,” because early Homo was not so much immersed in water as foraging on the water’s edge, diving or searching in shallow water, and probably also roaming inland. p. 162…Tobias died in 2011, but in a chapter published in 2011 he wrote: “In contrast with the heavy, earth-bound view of hominin evolution, an appeal is made here for students of hominin evolution to body up, lighten and leaven their strategy by adopting a far great [sic] emphasis on the role of water and waterways in hominin phylogeny, diversification, and dispersal from one watergirt [sic] milieu to others…p. 95… Even in its modern form the aquatic ape hypothesis (AAH) remains controversial. Verhaegen quotes Wikipedia as asserting that “there is no fossil evidence for the AAH”; he disagrees, citing evidence that “virtually all archaic Homo sites are associated with abundant edible shellfish”…p. 162

To me, one of the more convincing arguments for a water-related evolutionary path for humans is the idea of giving birth in water. This may have been how we accommodated a larger skull size alongside a pelvis designed for upright walking. I got this idea when I read an article several years ago about a woman who gave birth in water. Her husband recorded the birth and put it on the internet. The clip went viral because the birth took place almost effortlessly, with none of the agonizing pain that normally accompanies hospital births. This would also explain why human babies, alone among the primates, instinctively know to hold their breath in water. Perhaps human females are ideally suited to give birth in water, a practice that is once again gaining popularity in some alternative circles. Here's the story:

Mum’s water birth video stuns the internet (BBC)

At this time, human ancestors adopted a “dual-mode” existence, using facilitative bipedalism to migrate across the rapidly expanding grasslands while retaining the ability to take refuge in the tree canopy if needed. This is evident from the anatomy of the various species of Australopithecus, who were able to walk upright but retained hands, feet, and arms that allowed them to climb trees and hang from branches. These chimp-sized animals may have adopted a scavenging mode of existence to get their daily protein: cracking bones with stones to get at the marrow inside, and stealing eggs from nests, while retreating back into the forest canopy when faced with fiercer competition. The most famous Australopithecus, Lucy, appears to have died by falling out of a tree. They may have done this scavenging during the heat of the day, since many big predators are crepuscular or nocturnal, which would have led to the gradual loss of hair and the addition of sweat glands. Already fairly social animals, they may have responded to threats by banding together and hurling stones at predators and rivals, as explored further below.

2. Throwing

Humans are able to throw projectiles with much greater force and accuracy than any other primate, or indeed any other animal. Of course, monkeys and chimps do hurl things (such as poo) when they are upset. But in humans this ability evolved to its furthest extent.

We probably first began by throwing rocks and stones, a technique that is far more effective when done in groups, as William Von Hippel noted in The Social Leap. From there we graduated to spears, javelins, and boomerangs, and then invented devices to further enhance our throwing capacity, such as spear-throwers (woomeras) and slings. Slings remained devastating battlefield weapons well into historical times; in the Roman era the most famous and effective slingers came from the Balearic Islands in the western Mediterranean and were widely employed as mercenaries.

Paul Bingham has argued that one of the characteristics that have reinforced social cohesion in humans is the ability to kill at a distance. Human societies can therefore be rid of dissenters in their midst, or threats from the outside, with relatively little threat of harm to the killer! Nevertheless the dissenters, or the rival band, may themselves resort to similar tactics, and so began an arms race that has continued to this day. It started, perhaps, with the throwing of rocks, followed in succession by axes, spears, boomerangs, bows and arrows, guns, rockets, bombs, and nuclear missiles, not to mention insults. Such are the marks of human progress…

Whether or not it was throwing that sustained bipedalism in an increasingly terrestrial existence, it does at least illustrate that bipedalism frees the hands for intentional potentially skilled action. It allows us to use our hands for much more than specifically chucking stuff about. Moreover, our primate heritage means that our arms are largely under intentional control, creating a new potential for operating on the world, instead of passively adapting to it. Once freed from locomotory duty, our hands and arms are also free to move in four-dimensional space, which makes them ideal signaling systems for creating and sending messages… p. 190

The idea that we evolved to throw also helps explain the mystery of the seeming perfection of the human hand, described by Jacob Bronowski as “the cutting edge of the mind.” Adding to the power and precision of throwing are the sensitivity of the fingers, the long opposable thumb, and the large area of the cortex involved in control of the hand. The shape of the hand evolved in ways consistent with holding and hurling rocks of about the size of modern baseballs or cricket balls, missile substitutes in modern pretend war. In real war, hand grenades are about the same size.

Our hands have also evolved to provide two kinds of grip, a precision grip and a power grip, and Richard W. Young suggests that these evolved for throwing and clubbing, respectively. Not only do we see young men throwing things about in sporting arenas, but we also see them wielding clubs, as in sports such as baseball, cricket, hockey, or curling. In various forms of racquet sports, the skills of clubbing and throwing seem to be combined. p. 188

3. Extended Social Groups

The last common ancestor of humans and chimps may have lived in the Pliocene Epoch, which began some 5.333 million years ago. As the Pliocene ended, it transitioned into the Pleistocene, an epoch characterized by a rapidly changing climate and a recurring series of crippling ice ages. The Pleistocene lasted from about 2.588 million years ago to roughly 12,000 years ago, and was succeeded by the more stable climate of the Holocene. It was during the Pleistocene that woodlands shrank and were replaced by open grasslands. During this time the genus Homo emerged, and human ancestors permanently transitioned from an arboreal existence to life as savanna-based hunter-gatherers, possibly beginning as scavengers.

Adaptability was key. Any species which relied solely on the slow pace of genetic evolution would have been at a severe disadvantage in the rapidly changing world of the Pleistocene. The newly evolved Homo genus, with its omnivorous diet, free hands for tool use, upright gait, large brain, and gregarious social nature, was ideally suited to this epoch. During this time walking went from facilitative to obligatory, and we left the tree canopy behind for good. Homo habilis was already using stone tools near the beginning of this era (although earlier australopithecines may have used some tools as well). Then came a plethora of upright-walking, tool-using apes: Homo rudolfensis, Homo ergaster, and Homo erectus, the perennial favorite of schoolchildren.

Humans have been using tools for 300,000 years longer than we thought (io9)

Coming from the forest and moving out onto the savanna, these apes could not compete on speed, strength or aggressiveness against the big predators. What did they do? The solution was to form larger and more tightly-knit social groups. It is these social groups that are thought to have been a primary driver behind increasing intelligence and brain expansion (about which, more below).

An especially dangerous feature of the savanna was the presence of large carnivorous animals, whose numbers peaked in the early Pleistocene. They included at least 12 species of saber-tooth cats and nine species of hyena. Our puny forebears had previously been able to seek cover from these dangerous predators in more forested areas, and perhaps by retreating into water, but such means of escape were relatively sparse on the savanna. Not only did the hominins have to avoid being hunted down by these professional killers, with sharp teeth and claws, and immense speed and strength, but they also had to compete with them for food resources. p. 192

It was human intelligence and sociability that allowed our ancestors to survive in this threatening environment—a combination of man’s “intellectual powers,” and “social qualities,” as Charles Darwin put it.

The hominins therefore built on their primate inheritance of intelligence and social structure rather than on physical attributes of strength or speed. This is what might be termed the third way, which was to evolve what has been termed the “cognitive niche,” a mode of living built on social cohesion, cooperation, and efficient planning. It was a question of survival of the smartest. p. 194

It was not only our social nature but our unique social strategy that set us apart from all other primates. Simply put, we developed extended families. We also developed cooperative child-rearing and pair-bonding, which allowed us to evolve larger social groups than other primates, who remain largely in fixed-size groups throughout their lives and do not typically develop deep relationships outside of them.

Sarah Blaffer Hrdy has argued that social bonding evolved first in the context of child rearing. She points out that great apes are loath to allow others to touch their infants during the first few months, whereas human mothers are very trusting in allowing others to carry and nurture their babies. This is evident not only in daycare centers, but in extended families units [sic] that characterize many peoples of the world. Among New Zealand Maori, for instance, initial teaching and socialization is based on a larger unit known as whanau, which is the extended family, including children, parents, grandparents, cousins, uncles, aunts, and often beyond. The understanding of whanau is recursive, looping back many generations. p. 194

It takes a village indeed!

This puts paid to all seventeenth-century English Liberal notions of government that rely on “voluntary associations” or purposeful submission to a despot in exchange for protection and order. Governments did not form in order to secure “private property” as John Locke argued, nor were early societies a “war of all against all” as Hobbes thought—we would have gone extinct long ago if that were the case. It is private property, not social organization, which is novel in the human experience. Since extended families and kinship groups predate the genus Homo, the fact is that we humans have never had to make any sort of conscious, rational decision to form complex social groups—we are literally born into them! The question is, rather, how such groups evolved from the small tribal societies of the past into today’s large, impersonal, market-based nation-states.

4. Controlled use of Fire

At some point, humans harnessed fire, becoming the only species (that we know of) to do so. To get a bit more technical, we harnessed a source of extrasomatic energy, later supplemented by fossil fuels. Exactly when this occurred, however, is a matter of debate. Fire does not fossilize, and while the results of ancient combustion can be detected, it is often difficult to determine whether the fires were natural or artificially controlled. Rather, arguments for very archaic use of fire come primarily from human anatomy—humans are adapted to cooked food and cannot survive on strictly raw-food diets, unlike chimps and gorillas. This leads to the conclusion that humans have been using fire long enough to evolve a dependency on it—certainly for hundreds of thousands of years. Our small jaws, duller teeth, shorter intestines, and bulbous skulls all derive from anatomical changes due to cooking. Some recent evidence suggests fire use over one million years ago. It indicates that sitting around a campfire and telling stories has been part of social bonding since time immemorial.

Richard Wrangham has suggested that the secret of hominin evolution originated in the controlled use of fire, which supplied warmth and protection from hostile predators. From around two million years ago, he thinks, Homo erectus also began to cook tubers, greatly increasing their digestibility and nutritional value. Cooked potatoes, I’m sure you will agree, are more palatable than raw ones. Other species may have been handicapped because they lacked the tools to dig for tubers, or the means to cook them.

Cooked food is softer, leading to the small mouths, weak jaws, and short digestive system that distinguish Homo from earlier hominins and other apes. Cooking also led to a division of labor between the sexes, with women gathering tubers and cooking them while the men hunted game. At the same time, these complementary roles encouraged pair bonding, so that the man can be assured of something to eat if his hunting expedition fails to produce meat to go with the vegetables. p. 195

5. Rapid Brain Expansion

During the Pleistocene, the human brain underwent a remarkable and unprecedented expansion, for reasons that are still debated. From the australopithecines to archaic humans, the brain roughly tripled in size. Brain size correlates roughly with body size across species, and the ratio of actual brain size to the size expected for a given body size is known as the encephalization quotient (EQ). Given humans' relatively small body size, our brains are much larger than they "should" be. The brain is also an energy hog, taking up some 20 percent of our metabolism to keep running.
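As a rough illustration, here is a back-of-the-envelope sketch of the EQ calculation (my own, not from the book), using Jerison's classic mammalian baseline of expected brain mass ≈ 0.12 × body mass^(2/3), with both masses in grams, and approximate species averages:

```python
# Encephalization quotient: actual brain mass divided by the mass expected
# for a generic mammal of the same body size (Jerison's baseline).
# Species figures are rough averages, for illustration only.

def eq(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2 / 3)   # expected brain mass in grams
    return brain_g / expected

for species, brain_g, body_g in [
    ("chimpanzee", 400, 45_000),
    ("gorilla", 465, 120_000),
    ("human", 1_350, 65_000),
]:
    print(f"{species:10s} EQ = {eq(brain_g, body_g):.1f}")
```

On this generic mammalian baseline humans come out around 7, chimps around 2.6, and gorillas around 1.6; the "three times the size expected of an ape with the same body size" figure quoted below uses an ape-specific regression instead.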

Fossil evidence shows that brain size remained fairly static in the hominins for some four million years after the split from the apes. For example, Australopithecus afarensis…had a brain size of about 433 cc, slightly over the chimpanzee size of about 393 cc, but less than that of the much larger gorilla at 465 cc. It was the emergence of the genus Homo that signaled the change. Homo habilis and Homo rudolfensis were still clumsily bipedal but their brains ranged in size from around 500 cc to about 750 cc, a small increase over that of earlier hominins. Homo ergaster emerged a little over 1.8 million years ago, and by some 1.2 million years ago boasted a brain size of some 1,250 cc. Thus in a space of about 750,000 years, brain size more than doubled—that’s pretty quick on an evolutionary time scale.

Brain size continued to increase at a slower rate. It appears to have reached a peak, not with Homo sapiens, dating from about 170,000 years ago, but with Neanderthals…In some individual Neanderthals, brain capacity seems to have been as high as 1,800 cc, with an average of about 1,450 cc. Brain size in our own species, Homo sapiens, is a little lower, with a present-day average of about 1,350 cc, but still about three times the size expected of an ape with the same body size…this final increase in brain size—the dash for the summit as it were—seems to have coincided with an advance in technological innovation over that which had prevailed for the previous 1.5 million years. pp. 198-199

It’s not just the expansion of the brain that is remarkable, but the expansion of the neocortex, or “outer shell” of the brain where many of the “higher” cognitive functions reside. Here, too, we find that the human neocortex is much larger than expected given the size of the brain and body. The size of the neocortex is roughly correlated with intelligence and the size of the social group in mammals, giving us some indication of the intelligence and group size of early human ancestors. “Humans have the largest neocortical ratio, at 4.1, closely followed by the chimpanzee at 3.2. Gorillas lumber in at 2.65, orangutans at 2.99, and gibbons at 2.08. According to the equation relating group size to neocortical ratio, humans should belong to groups of 148, give or take about 50. This is reasonably consistent with the estimated sizes of early Neolithic villages.” (p. 198)
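The "equation relating group size to neocortical ratio" is Dunbar's regression across primate species. A quick sketch using the commonly cited coefficients (they are not given in the book, so treat them as my assumption) reproduces the famous prediction of roughly 148 for humans:

```python
import math

# Dunbar's (1992) primate regression, in its commonly cited form:
# log10(N) = 0.093 + 3.389 * log10(CR), where N is mean group size and
# CR is the neocortex ratio (neocortex volume / rest of the brain).
def predicted_group_size(cr: float) -> float:
    return 10 ** (0.093 + 3.389 * math.log10(cr))

for species, cr in [("gibbon", 2.08), ("gorilla", 2.65), ("orangutan", 2.99),
                    ("chimpanzee", 3.2), ("human", 4.1)]:
    print(f"{species:10s} CR = {cr:.2f} -> N = {predicted_group_size(cr):.0f}")
# human -> N = 148, i.e. "Dunbar's number", give or take the wide
# confidence interval mentioned in the quote above.
```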

Robin Dunbar has suggested that even though Neanderthal brains were larger overall than Homo sapiens, more of their neocortex was devoted to visual processing—their skulls indicate eyes that were 20 percent larger than our own. This was an adaptation to the darkness of the northern climes. The running of this enlarged visual system, he argues, precluded parts of the brain from being harnessed for other uses—social uses in particular. Thus, Neanderthals were not able to develop larger groupings, or things such as higher-order religions and recursion, he argues. Homo sapiens, evolving in the more tropical regions of Africa, did not have this same handicap.

Perhaps the most extraordinary revelation from this chapter is that there appear to have been significant genetic changes to the brain within recorded history!

We are beginning to learn something of the genetic changes that gave us our swollen heads. One gene known to be a specific regulator of brain size is the abnormal spindle-like microcephaly associated (ASPM) gene, and the evidence suggests strong positive selection of this gene in the lineage leading to Homo sapiens. Indeed, a selective sweep appears to have occurred as recently as 5,800 years ago, suggesting that the human brain is still undergoing rapid evolution. Another gene known as microcephalin (MCPH6) has also been shown to regulate brain size, and one variant in modern humans arose an estimated 37,000 years ago. Other genes involved in the control of brain size that have undergone accelerated rates of protein evolution at points in human lineage have also been identified. p. 199

What’s most extraordinary about this information, given our discussion of Julian Jaynes’s theories, is that the evidence above indicates a selective sweep of genes that affect brain development in exactly the time-frame specified by Jaynes—something that Jaynes’s critics have always claimed was patently impossible! Of course, this does not mean that these genes are what lay behind his hypothesized development of “consciousness”—only that it is possible that there were indeed changes to how the brain functions within recorded history.

Often it's claimed that the breakdown of the bicameral mind would have required a massive change in the brain's architecture. Critics mistakenly assert that Jaynes implied that the corpus callosum—the massive bundle of nerves that connects the two hemispheres—evolved during historical times. But Jaynes claims nothing of the sort! While he discusses split-brain patients (with a severed corpus callosum) in order to understand the separate functions of each hemisphere, nowhere does he imply any recent anatomical changes to the brain's basic structure. Besides, the fact that hearing voices is common in humans today indicates that no such massive change is needed in any case. Rather, only a slight change in function was required. Jaynes suggested that an inhibition of communication across the anterior commissure, between the right-hemisphere analogues of the language areas (Broca's and Wernicke's) and the left hemisphere, might have contributed to the breakdown of the bicameral mind. There is also evidence that the amount of "white matter" in the brain (as contrasted with gray matter) changes brain function, and abnormalities in white matter have been associated with schizophrenia and other mental illnesses. We have no idea whether the genes mentioned above had anything to do with this, of course. And preliminary data show that the ASPM gene does not affect IQ, so it was not raw intelligence that caused its selective sweep. Could this gene have altered the functioning of the brain in something like the manner Jaynes described, and did this give rise to the recursive "self" developing and expanding sometime during the late Bronze Age? Here we can only speculate.

Here’s anthropologist John Hawks explaining the significance of this discovery:

Haplogroup D for Microcephalin apparently came under selection around 37,000 years ago (confidence limit from 14,000 to 60,000 years ago). This is very, very recent compared to the overall coalescence age of all the haplotypes at the locus (1.7 million years). Some populations have this allele at 100 percent, while many others are above 70 or 80 percent. Selection on the allele must therefore have been pretty strong to cause this rapid increase in frequency. If the effect of the allele is additive or dominant, this selective advantage would be on the order of 2 or 3 percent — an advantage in reproduction.

The story for ASPM is similar, but even more extreme. Here, the selected allele came under selection only 5800 years ago (!) (confidence between 500 and 14,100 years). Its proliferation has almost entirely occurred within the bounds of recorded history. And to come to its present high proportion in some populations of near 50 percent in such a short time, its selective advantage must have been very strong indeed — on the order of 5 to 8 percent. In other words, for every twenty children of people without the selected D haplogroup, people with a copy of the allele averaged twenty-one, or slightly more.

Recent human brain evolution and population differences (john hawks weblog)
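To get a feel for how quickly a reproductive advantage of this size compounds, here is a toy model (mine, not Hawks's) of deterministic genic selection. The assumptions are illustrative only: a 5 percent per-generation advantage, 25-year generations, and a starting frequency of one in 100,000:

```python
# Deterministic selective-sweep toy model: an allele with fitness
# advantage s changes frequency each generation as
#   p' = p * (1 + s) / (1 + p * s)
# Assumed numbers (illustrative): s = 0.05, 25-year generations,
# starting frequency 1 in 100,000.

def sweep(p0: float, s: float, generations: int) -> float:
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

generations = 5800 // 25   # ~232 generations since the estimated onset
final = sweep(p0=1e-5, s=0.05, generations=generations)
print(f"allele frequency after {generations} generations: {final:.0%}")
# prints about 45%, in the ballpark of the near-50% frequency that the
# selected ASPM haplogroup reaches in some present-day populations
```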

In a bizarre Planet of the Apes scenario, Chinese scientists have recently inserted human genes related to brain growth and cognition into monkeys in order to determine what role genes play in the evolution of intelligence:

Human intelligence is one of evolution’s most consequential inventions. It is the result of a sprint that started millions of years ago, leading to ever bigger brains and new abilities. Eventually, humans stood upright, took up the plow, and created civilization, while our primate cousins stayed in the trees.

Now scientists in southern China report that they’ve tried to narrow the evolutionary gap, creating several transgenic macaque monkeys with extra copies of a human gene suspected of playing a role in shaping human intelligence.

“This was the first attempt to understand the evolution of human cognition using a transgenic monkey model,” says Bing Su, the geneticist at the Kunming Institute of Zoology who led the effort…

…What we know is that our humanlike ancestors’ brains rapidly grew in size and power. To find the genes that caused the change, scientists have sought out differences between humans and chimpanzees, whose genes are about 98% similar to ours. The objective, says Sikela, was to locate “the jewels of our genome”—that is, the DNA that makes us uniquely human.

For instance, one popular candidate gene called FOXP2—the “language gene” in press reports—became famous for its potential link to human speech. (A British family whose members inherited an abnormal version had trouble speaking.) Scientists from Tokyo to Berlin were soon mutating the gene in mice and listening with ultrasonic microphones to see if their squeaks changed.

Su was fascinated by a different gene: MCPH1, or microcephalin. Not only did the gene’s sequence differ between humans and apes, but babies with damage to microcephalin are born with tiny heads, providing a link to brain size. With his students, Su once used calipers and head spanners to measure the heads of 867 Chinese men and women to see if the results could be explained by differences in the gene.

By 2010, though, Su saw a chance to carry out a potentially more definitive experiment—adding the human microcephalin gene to a monkey…

Chinese scientists have put human brain genes in monkeys—and yes, they may be smarter (MIT Technology Review)

One of the more remarkable theories of brain growth argues that a virus, or perhaps even a symbiotic bacterium, helped along human brain growth, and hence intelligence. Robin Dunbar raises the intriguing possibility that brain expansion was fueled by a symbiotic alliance with the tuberculosis bacterium!

The problem of supporting a large brain is so demanding that it may have resulted in the rather intriguing possibility that we used external help to do so in the form of the tuberculosis bacterium. Although TB is often seen as a terrible disease, in fact only 5 per cent of those who carry the bacterium are symptomatic, and only a proportion of those die (usually when the symptoms are exacerbated by poor living conditions). In fact, the TB bacterium behaves much more like a symbiont than a pathogen – even though, like many of our other symbionts, it can become pathogenic under extreme conditions. The important issue is that the bacterium excretes nicotinamide (vitamin B3), a vitamin that turns out to be crucial for normal brain development. Chronic shortage of B3 rapidly triggers degenerative brain conditions like pellagra. The crucial point here is that vitamin B3 is primarily available only from meat, and so a supplementary source of B3 might have become desirable once meat came to play a central role in our diet. Hunting, unlike gathering, is always a bit chancy, and meat supplies are invariably rather unpredictable. This may have become even more crucial during the Neolithic: cereals, in particular, are poor in vitamin B3 and a regular alternative supply might have become essential after the switch to settled agriculture.

Although it was once thought that humans caught TB from their cattle after domestication around 8,000 years ago, the genetic evidence now suggests that human and bovine TB are completely separate strains, and that the human form dates back at least 70,000 years. If so, its appearance is suspiciously close to the sudden upsurge in brain size in anatomically modern humans that started around 100,000 years ago. Human Evolution: Our Brains and Behavior by Robin Dunbar; pp. 248-249

6. Childhood

It's not just brain size. The human brain undergoes an unusually large amount of development after birth, unlike the brains of most other species, even the other great apes, which have nothing like our extended childhood and adolescence. This leads to the helplessness and utter dependency of our infants in the near term, but it has a big payoff in social adaptability over the long term. It means that humans' intellectual capabilities can be shaped, to a large extent, by the environment they are born into rather than by genes alone: the brain is "wired up" according to the needs of the environment it finds itself in, affecting things like language and sociability. This is key to what we saw above: adaptability and behavioral flexibility were the secret of our species' success.

Another critical difference between humans and other primates lies in the way in which the human brain develops from birth to adulthood. We humans appear to be unique among our fellow primates, and perhaps even among the hominins, in passing through four developmental stages–infancy, childhood, juvenility, and adolescence…During infancy, lasting from birth to age two and a half, infants proceed from babbling to the point that they know that words or gestures have meaning, and can string them together in two-word sentences. This is about the level that the bonobo Kanzi has reached…it is the next stage, childhood, that seems to be especially critical to the emergence of grammatical language and theory of mind…Childhood seems to be the language link that is missing in great apes and the early hominins, which may account for the fact that, so far at least, great apes have not acquired recursive grammar. But it is also during childhood that theory of mind, episodic memory, and understanding of the future emerge. Childhood may be the crucible of the recursive mind.

During the juvenile phase, from age 7 to around 10, children begin to appreciate the more pragmatic use of language, and how to use language to achieve social ends. The final stage is adolescence, which…is…unique to our own species, and sees the full flowering of pragmatic and social function, in such activities as storytelling, gossip, and sexual maneuvering. Adolescence also has a distinctive effect on male speech, since the surge of testosterone increases the length and mass of the vocal folds, and lowers the vibration frequency…

Locke and Bogin focus on language, but the staged manner in which the brain develops may account more generally for the recursive structure of the human mind. Recursive embedding implies hierarchical structure, involving metacontrol over what is embedded in what, and how many layers of embedding are constructed. Early development may establish basic routines that are later organized in recursive fashion.
pp. 201-203

I've always been struck by how children who are more intellectually precocious tend to take longer to mature—they are "late bloomers." In contrast, there are those who mature very quickly and then hit a plateau. Of course, we lump them all together in prison-like schools according to chronological age, despite highly variable developmental speeds and gender differences. This leads to all sorts of bullying and abuse, as the faster-developing "jocks" torment the slower-developing "nerds"—a feature unique to modern industrial civilization. The emotional scarring resulting from this scenario causes incalculable suffering and misery, but I digress…

Human children are the most voracious learners planet Earth has ever seen, and they are that way because their brains are still rapidly developing after birth. Neoteny, and the childhood it spawned, not only extended the time during which we grow up but ensured that we spent it developing not inside the safety of the womb but outside in the wide, convoluted, and unpredictable world.

The same neuronal networks that in other animals are largely set before or shortly after birth remain open and flexible in us. Other primates also exhibit “sensitive periods” for learning as their brains develop, but they pass quickly, and their brain circuitry is mostly established by their first birthday, leaving them far less touched by the experiences of their youth.

Based on the current fossil evidence, this was true to a lesser extent of the 26 other savanna apes and humans. Homo habilis, H. ergaster, H. erectus, even H. heidelbergensis (which is likely the common ancestor of Neanderthals, Denisovans, and us), all had prolonged childhoods compared with chimpanzees and gorillas, but none as long as ours. In fact, Harvard paleoanthropologist Tanya Smith and her colleagues have found that Neanderthals reversed the trend. By the time they met their end around 30,000 years ago, they were reaching childbearing age at about the age of 11 or 12, which is three to five years earlier than their Homo sapiens cousins…

We are different. During those six critical years, our brains furiously wire and rewire themselves, capturing experience, encoding and applying it to the needs of our particular life. Our extended childhood essentially enables our brains to better match our experience and environment. It is the foundation of the thing we call our personalities, the attributes that make you you and me me. Without it, you would be far more similar to everyone else, far less quirky and creative and less, well … you. Our childhood also helps explain how chimpanzees, remarkable as they are, can have 99 percent of our DNA but nothing like the same level of diversity, complexity, or inventiveness.

The Evolution of childhood: Prolonged development helped Homo sapiens succeed (Slate)

7. Tool Use

Humans used tools largely in the context of hunting and butchering large prey, but probably also to secure other resources, such as digging up the tubers mentioned earlier. Gourds and eggshells are used by foragers to carry water, and slings for hurling stones may go back a very long way. In the book The Artificial Ape, archaeologist Timothy Taylor argued that humans must have used baby slings—probably made from animal pelts—to carry their infants as far back as a million years ago. He makes this case because infants cannot walk effectively for the first few years of life, and since early humans were constantly on the move, mothers must have had some way of carrying their offspring that left their hands free (other apes cling to their mothers' hair—not an option for us). He argues that the sling was critical in allowing our infants to be born as helpless as they are, and thus facilitated the extended infancy described above. Fire may also have been a useful tool: many cultures around the world have used it to reshape the natural landscape and drive game.

When looking at the long arc of history, what stands out is not so much the rapidity of cultural change, but rather just how slow tool use and development was over millions of years. While today we are used to rapid, constant technological change, during the Pleistocene toolkits often remained unchanged for hundreds of thousands of years. So much for innovation!

Nevertheless advances in toolmaking were slow. There is little to suggest that the early hominins were any more adept at making or using tools than are present-day chimpanzees, despite being bipedal, and it was not really until the appearance of the genus Homo that toolmaking became more sophisticated.

The earliest such tools date from about 2.5 million years ago, and are tentatively associated with H. rudolfensis. These tools, relatively crude cutters and scrapers, make up what is known as the Oldowan industry. A somewhat more sophisticated tool industry, known as the Acheulian industry, dates from around 1.6 million years ago in Africa, with bifacial tools and hand-axes…The Acheulian industry remained fairly static for about 1.5 million years, and seems to have persisted in at least one human site dating from only 125,000 years ago. Nevertheless, there was an acceleration of technological invention from around 300,000 to 400,000 years ago, when the Acheulian industry gave way to the more versatile Levallois technology. Tools comprising combinations of elements began to appear, including axes, knives and scrapers mounted with hafts or handles, and stone-tipped spears. John F. Hoffecker sees the origins of recursion in these combinatorial tools, which were associated with our own forebears, as well as with the Neanderthals, who evolved separately from around 700,000 years ago. pp. 205-206

Corballis speculates that the rapid tool advancement seen in more recent Homo sapiens owes its origins more to our evolved social capabilities than to developments arising from the crude stone tools of our earlier ancestors: “My guess is that recursive thought probably evolved in social interaction and communication before it was evident in the material creations of our forebears. The recursiveness and generativity of technology, and of such modern artifacts as mathematics, computers, machines, cities, art, and music, probably owe their origins to the complexities of social interaction and story telling, rather than to the crafting of tools…” (p. 206)

The full flowering of stone tool technology came during a period called the Upper Paleolithic, or Late Stone Age, also associated with such behaviorally modern artifacts as sculptural “Venus” figurines, cave paintings, and deliberate burials (indicating some rudimentary religious belief). The adoption of such “modern” behavioral traits, and the adoption of vastly more sophisticated tools is related, argues Corballis:

This second wave of innovation was most pronounced in Europe and western Asia, beginning roughly when Homo sapiens arrived there. The Upper Paleolithic marked nearly 30,000 years of almost constant change, culminating in a level of modernity equivalent to that of many present-day indigenous peoples. Technological advances included clothing, watercraft, heated shelters, refrigerated storage pits, and bows and arrows. Elegant flutes made from bone and ivory have been unearthed in southwest Germany, dated at some 40,000 years ago, suggesting early musical ensembles…Flax fibers dating from 30,000 years ago have been found in a cave in Georgia, and were probably used in hafting axes and spears, and perhaps to make textiles; and the presence of hair suggests also that they were used to sew clothes out of animal skins. The people of this period mixed chemical compounds, made kilns to fire ceramics, and domesticated other species.

Stone tools date from over two million years ago, but remained fairly static until the Upper Paleolithic, when they developed to include more sophisticated blade tools, as well as burins and tools for grinding. Tools were also fashioned from other materials, such as bone and ivory, and included needles, awls, drills, and fishhooks…p. 214

8. Migration

The general consensus today is that all modern humans are descended from groups that left Africa after 70,000 years ago, perhaps driven by climate change. These migrants eventually displaced all earlier species of archaic Homo. We also know that some interbreeding between our ancestors and these other species took place. Humans carry DNA signatures from Neanderthals, Denisovans, and an as-yet-undiscovered human ancestor.

Evolutionary biologists have classified six major haplogroups in this story: L0, L1, L2, L3, M, and N. A haplogroup is a large grouping of haplotypes, which are groups of alleles (variant forms of a gene) inherited together from a single parent. In this case, geneticists used mitochondrial DNA, which is inherited exclusively from our mothers, to define the haplogroups. Mitochondria—the "batteries" of the cell—began their existence as symbiotic bacteria, and thus carry a distinct genetic signature. Of the four "L" haplogroups, only L3 migrated out of Africa. The M and N haplogroups are descendants of the L3 haplogroup; haplogroup M has a more recent common ancestor than haplogroup N, and is found both inside and outside Africa. All indigenous lineages outside of Africa derive exclusively from the M and N haplogroups.
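Fittingly for a review of a book about recursion, the haplogroup classification is itself a recursive tree. A minimal sketch (the branch structure is flattened from the paragraph above; the real mtDNA phylogeny is more deeply nested):

```python
# mtDNA haplogroup lineages from the paragraph above, flattened into a
# simple parent -> children map (the real phylogeny is more deeply nested).
TREE = {
    "African root": ["L0", "L1", "L2", "L3"],
    "L3": ["M", "N"],   # the only lineages whose descendants left Africa
}

def show(node: str, depth: int = 0) -> None:
    """Recursively print the tree, one indent level per branching."""
    print("  " * depth + node)
    for child in TREE.get(node, []):
        show(child, depth + 1)

show("African root")
```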

Why haplogroup L3 alone migrated out of Africa is a big question. Another related big question for evolutionary biologists is how much of modern human behavior existed in Africa prior to this outmigration, and how much arose after it. For example, did complex spoken language evolve before or after we left Africa? What about symbolic thought, art, religion, and sophisticated tool use? Did we use fire? Given that sapiens displaced all the earlier hominins who had evolved outside the continent (most likely from Homo heidelbergensis, and perhaps a few remote branches of erectus), we must have had some kind of innate advantage over the native inhabitants, the thinking goes. What exactly it was has proved harder to determine, but recursion might well be the answer.

The population of the earliest lineage, L0, is estimated to have expanded through the period 200,000 to 100,000 years ago…The L0 and L1 lineages exist at higher frequencies than the other lineages among present-day hunter-gatherers, who may therefore offer a window into the early history of Homo sapiens…The L3 lineage is of special interest, because it expanded rapidly in size from about 60,000 to 80,000 years ago, and seems to have been the launching pad for the migrations out of Africa that eventually populated the globe. Of the two non-African lineages that are the immediate descendants of L3, lineage M is estimated to have migrated out of Africa between 53,000 and 69,000 years ago, and lineage N between 50,000 and 64,000 years ago.

Why did L3 expand so rapidly, and migrate from Africa? One suggestion is that L3 gained some cultural advantage over the other lineages, perhaps through the invention of superior technologies, and that this gave them the means to migrate successfully. Paul Mellars suggests that the African exodus was predated by advances in toolmaking, including new stone-blade technologies, the working of animal skins, hafted implements, and ornaments. Some of the improvements in tool technology can be attributed to the use of fire to improve the flaking properties of stone, which dates from around 72,000 years ago in the south coast of Africa…

It need not follow that the L3 people were biologically more advanced than their African cousins, and it may well be that the exodus was driven by climate change rather than any technical superiority of L3 over the other haplogroups that remained in Africa. During the last ice age, there were a series of rapid climate swings known as Heinrich events. One of these events, known as H9, seems to have occurred at the time of the exodus from Africa, and was characterized by cooling and loss of vegetation, making large parts of North, West, and East Africa inhospitable for human occupation. It may also have been accompanied by a drop in sea levels, creating a land bridge into the Levant. So out of Africa they went, looking no doubt for greener pastures.

The exodus seems to have proceeded along the coast of the Red Sea, across the land bridge, and then round the southern coasts of Asia and southeast Asia, to reach New Guinea and Australia by at least 45,000 years ago. Mellars notes similarities in artifacts along that route as far as India, but remarks that technology seems to have declined east of India, especially in Australia and New Guinea. This may be attributable, he suggests, to the lack of suitable materials, adaptation to a more coastal environment requiring different technologies, and random fluctuations (cultural drift). A remarkable point of similarity, though, is the presence of red ochre in both Africa and in the earliest known human remains in Australia. Ochre was probably used in ritualistic body-painting, and perhaps in painting other surfaces. pp. 209-211

9. The Rise of Agriculture

Of course, the wild climate swings of the Pleistocene eventually came to an end, giving way to the (so far) more climatically stable Holocene epoch. As the Last Glacial Maximum (LGM) came to a close, the earth underwent a massive deglaciation, sending enormous amounts of cold, fresh water into the world's oceans. Sea levels rose, and many land areas became submerged, such as Beringia (isolating the Americas), Doggerland (isolating Britain), and the Sahul Shelf (isolating Australasia). The melting glaciers caused the climate to shift rapidly once again, killing off large numbers of the megafauna that earlier humans had relied on as their primary food source—animals such as the woolly mammoth and the ground sloth. The vast herds of reindeer that had provided sustenance for Paleolithic Europeans retreated northwards with the receding taiga, and southern Europe became heavily forested with larch and birch trees. In response, many human ancestors found themselves living in forests and grasslands once again, relying more and more on smaller, more solitary prey animals and on plant foods such as fruits, seeds, and nuts—a change sometimes referred to as the Broad Spectrum Revolution.

We know that the domestication of cereals dates from about 10,000–12,000 years ago in the Fertile Crescent—present-day Iraq, Syria, Lebanon, Israel, Kuwait, Jordan, southeastern Turkey, and southwestern Iran. What's less clear, however, is just how long these plants were cultivated before we grew them intensively enough to alter their DNA, to the point where the plants became dependent upon us (and we upon them). Recent evidence keeps pushing this horticultural activity—"proto-farming"—further and further back into the past, suggesting that agriculture is less of an anomaly or innovation than formerly thought. It apparently coexisted for a long time alongside seasonal hunting and foraging in the Near East. In addition, it appears that other desirable plant foods, like figs and legumes, were cultivated long before cereal grains. In some cases, the Neolithic Revolution appears to have been actively resisted for as long as possible by many cultures.

After the colder, drier Younger Dryas period ended about 12,000 years ago, humans began to settle down in grassy foothills and river valleys around the world and to cultivate plant foods—especially cereal crops—more intensively, beginning the long march toward civilization, for better or worse.

10. Final Conclusions

Corballis argues strongly here (as he has in several books) that language originated with gestures, possibly before human ancestors migrated from Africa. Verbal speech, by contrast, came about much later, and may have originated sometime after the exodus from Africa—perhaps as recently as 50,000–60,000 years ago, based on anatomical evidence.

He argues that second- or perhaps third-order recursion was sophisticated enough to account for many of the types of behaviors we see in archaic humans (such as cooperative hunting and rudimentary religious beliefs), but that higher levels of recursive thought were inaccessible to them. These, he says, are unique to Homo sapiens, and may have begun as recently as 30,000 years ago during the Upper Paleolithic era, but we don’t know for sure.

He argues that these recursive abilities were mainly the result of human social needs, which then exploded into other diverse areas such as art, music, religion, and—perhaps most significantly—grammatical language, which can combine recursively to form an infinite number of ideas and concepts. Much later, things like advanced technology, science and mathematics flowed from these same recursive abilities as human societies grew ever larger and more complex. Humans’ ability to plan for and anticipate alternative futures is far more sophisticated than in any other species.

These recursive abilities also gave us the ability to know what others are thinking, leading directly to cumulative memetic evolution—passing down ideas and concepts, and adding to and extending them over time. No other species can do this as we can. Recursive thought also gave birth to mental time travel, allowing human thought to roam both the past and the future, and imagine alternative futures, or even fictional ones—i.e. stories, which bind human societies together.  Stories gave rise to more complicated social groups which are recursively nested in expanding circles of kinship and affiliation.

By looking at simpler examples from around the animal kingdom, Corballis argues that the development of these abilities was not a sudden, random and inexplicable event as some have argued. Rather, he says, it was the natural outcome of the same evolutionary processes that led to all the other mental and physical abilities that make us unique in the animal kingdom:

In this book, I have tried to argue that recursion holds the key to that difference in mind, underlying such uniquely human characteristics as language, theory of mind, and mental time travel. It was not so much a new faculty, though, as an extension of existing faculties…there is no reason to suppose that the recursive mind evolved in some single, miraculous step, or even that it was confined to our species. Instead, it was shaped by natural selection, probably largely during the last two million years. p. 226

Although recursion was critical to the evolution of the human mind…it is not a “module,” the name given to specific, innate functional units, many of which are said to have evolved during the Pleistocene. Nor did it depend on some specific mutation, or some special kind of neuron, or the sudden appearance of a new brain structure. Rather, recursion probably evolved through progressive increases in short-term memory and capacity for hierarchical organization. These in turn were probably dependent on brain size, which increased incrementally, albeit rapidly, during the Pleistocene. But incremental changes can lead to sudden more substantial jumps, as when water boils or a balloon pops. In mathematics, such sudden shifts are known as catastrophes, so we may perhaps conclude that the emergence of the human mind was catastrophic. p. 222

I have argued…that the extension of recursive principles to manufacture and technology was made possible largely through changes in the way we communicate. Language evolved initially for the sharing of social and episodic information, and depended at first on mime, using bodily movements to convey meaning. Through conventionalization, communication became less mimetic and more abstract. In the course of time it retreated into the face and eventually into the mouth, as late Homo gained voluntary control over voicing and the vocal tract, and the recursive ability to create infinite meaning through combinations of articulate sounds. This was an exercise in miniaturization, releasing the rest of the body, as well as recursive principles, for manipulation of the physical environment.

The complexities of the modern world are not of course the product of individual minds. Rather, they are the cumulative products of culture. Most of us have no idea how a jet engine, or a computer, or even a lightbulb, actually works. We all stand on the shoulders of giants…pp. 223-224

This concludes my review of The Recursive Mind by Michael C. Corballis. I hope you’ve enjoyed it and learned something new along the way.

The Recursive Mind (Review) – 4


Part 1
Part 2
Part 3

3. Theory of Mind

Now, I know what you're thinking. All this stuff about recursion and Julian Jaynes is a little bit tedious. I'm not interested at all. Why does he keep talking about this stuff, anyway? Jaynes's ideas are clearly preposterous–only an idiot would even consider them. I should quit reading, or maybe head over to Slate Star Codex or Ran Prieur, or maybe Reddit or Ecosophia or Cassandra's Legacy or…

How do I know what you're thinking (correctly or not)? It's because I have a Theory of Mind (ToM), which allows me to imagine and anticipate what other people are thinking. Most likely you do too, which is why you can detect a degree of self-deprecation in my statements above.

Theory of mind is the ability to infer the mental states of other people. It's often referred to as a sort of "mind-reading." Daniel Dennett called it the "intentional stance," meaning that we understand that other people have intentions and motivations different from our own. It evolved because we have lived for millions of years in complex societies that require cooperation and intelligence. "According to the intentional stance, we interact with people according to what we think is going on in their minds, rather than in terms of their physical attributes…" (p. 137)

The lack of understanding of other people’s perspectives is what Jean Piaget noticed most in children. Central to many of his notions is the idea that children are egocentric, where their own needs and desires are all that exists: “During the earliest stages the child perceives things like a solipsist who is unaware of himself as subject and is familiar only with his own actions.” In other words, the child is unable to recognize that other people have thoughts or feelings different from (or even in conflict with) their own. They are also unaware that others cannot see the same thing that they do. One way to test theory of mind in children is the Sally-Anne false-belief test: Sally hides a marble and leaves the room, Anne moves it while she is away, and the child passes by predicting that Sally will look for the marble where she left it, not where it now is.


Theory of mind is also something that helps us teach and learn. In order for me to effectively teach you, I need to have some idea of what you’re thinking so I can present the material in a way you can understand it. And, of course, you need to have some idea of what’s going on in my mind to understand what I’m trying to teach you. Theory of mind, therefore, is related to cultural transmission (or, more precisely, memetics). Human culture plays such an outsize role in our behavior partly because of our theory of mind. Theory of mind is also a recursive operation which involves embedding your consciousness into someone else’s conscious mind:

From the point of view of this book, the important aspect of theory of mind is that it is recursive. This is captured by the different orders of intentionality… Zero-order intentionality refers to actions or behaviors that imply no subjective state, as in reflex or automatic acts. First-order intentionality involves a single subjective term, as in Alice wants Fred to go away. Second-order intentionality would involve two such terms, as in Ted thinks Alice wants Fred to go away. It is at this level that theory of mind begins.

And so on to third order: Alice believes that Fred thinks she wants him to go away. Recursion kicks in once we get beyond the first order, and our social life is replete with such examples. There seems to be some reason to believe, though, that we lose track at about the fifth or sixth order, perhaps because of limited working memory capacity rather than any intrinsic limit on recursion itself. We can perhaps just wrap our minds around propositions like: Ted suspects that Alice believes that he does indeed suspect that Fred thinks that she wants him (Fred) to go away. That’s fifth order, as you can tell by counting the words in bold type. You could make it sixth order by adding ‘George imagines that…’ at the beginning. p. 137
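Since the quoted passage is describing a literally recursive structure, here is a toy sketch (my illustration, not Corballis’s) that represents nested mental states as data and computes the order of intentionality by recursive descent:

```python
# Toy model (not from the book): a mental state embeds either a bare
# proposition (a string) or another mental state, so its order of
# intentionality falls out of a simple recursive count.

from dataclasses import dataclass
from typing import Union

@dataclass
class Attitude:
    agent: str                        # who holds the state: "Ted", "Alice", ...
    verb: str                         # "thinks", "wants", "suspects", ...
    content: Union["Attitude", str]   # another attitude, or a plain proposition

def order(state: Union["Attitude", str]) -> int:
    """Zero-order for bare propositions; one more for each embedding."""
    if isinstance(state, str):
        return 0
    return 1 + order(state.content)

# "Ted thinks Alice wants Fred to go away" -- second-order intentionality
ted = Attitude("Ted", "thinks",
               Attitude("Alice", "wants", "Fred goes away"))
print(order(ted))  # -> 2
```

Wrapping ted in one more layer (“George imagines that…”) raises the count to 3, exactly as the passage describes.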

Clearly, higher orders of intentionality have been driven by the demands of the social environment one finds oneself in. I will later argue that these higher-order intentionalities developed when we moved from environments where the challenges we faced were predominantly natural (finding food, escaping predators, etc.) to ones where the challenges were primarily social (managing workers, finding mates, leading armies, long-distance trading, negotiating debts, etc.). This change resulted in a fundamental remodeling of the human brain after the advent of settled civilization, which allowed us to function in such social environments, probably by affecting the action of our serotonin receptors. We’ll get to that later.

Do you see what she sees?

It’s not only another person’s mental perspective but even their physical perspective that ToM lets us take:

Whether instinctive or learned, the human ability to infer the mental states of others goes well beyond the detection of emotion. To take another simple and seemingly obvious example, we can understand what another individual can see. This is again an example of recursion, since we can insert that individual’s experience into our own. It is by no means a trivial feat, since it requires the mental rotation and transformation of visual scenes to match what the other person can see, and the construction of visual scenes that are not immediately visible.

For example, if you are talking to someone face-to-face, you know that she can see what is behind you, though you can’t. Someone standing in a different location necessarily sees the world from a different angle, and to understand that person’s view requires an act of mental rotation and translation. pp. 134-135
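That “act of mental rotation and translation” has a straightforward geometric reading: to work out what someone else sees, you re-express the scene in her frame of reference. A minimal 2-D sketch (my gloss on the passage, not the book’s):

```python
import math

def in_her_frame(point, her_pos, her_heading):
    """Re-express a world point in another observer's egocentric frame:
    translate so she sits at the origin, then rotate so her line of
    sight becomes the +x axis."""
    dx, dy = point[0] - her_pos[0], point[1] - her_pos[1]
    c, s = math.cos(-her_heading), math.sin(-her_heading)
    return (dx * c - dy * s, dx * s + dy * c)

# I face +x at the origin; she stands at (2, 0) facing me (heading pi).
# What is behind me, at (-1, 0), lies well in front of her:
print(in_her_frame((-1, 0), (2, 0), math.pi))  # ~(3.0, 0.0)
```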

I suspect this ability has something to do with out-of-body experiences, where we “see” ourselves from the perspective of somewhere outside our bodies. Recall Jaynes’s point that the “self” is not truly located anywhere in physical space–including behind the eyes. Thus our “self” can theoretically locate itself anywhere, including the ceiling of our hospital room when we are dying.

Not everyone has theory of mind, though, at least not to the same degree. One of the defining characteristics of the autism spectrum is difficulty with ToM. Autistic people tend to be unable to infer what others are thinking, which leads to certain social handicaps. Corballis makes a common distinction between “people-people” (as in, I’m a “people-person”–avoid anyone who describes themselves this way), and “things-people”, exemplified by engineers, doctors, scientists, programmers, professors, and such-like. “People-people” typically have a highly-developed ToM, which facilitates their feral social cunning. Technically-minded people, by contrast, often (though not always) have a less-developed theory of mind, as exemplified by this quote from the fictional Alan Turing in The Imitation Game: “When people talk to each other, they never say what they mean. They say something else and you’re expected to just know what they mean.”

Research has found autistic people who ace intelligence tests may still have trouble navigating public transportation or preparing a meal. Scoring low on a measure of social ability predicts an incongruity between IQ and adaptive skills. (Reddit)

One fascinating theory of autism that Corballis describes is based on a conflict between the mother’s and the father’s genes imprinting on the developing fetus in the womb:

In mammalian species, the only obligatory contribution of the male to the offspring is the sperm, and the father relies primarily on his genes to influence the offspring to behave in ways that support his biological interest.

Paternal genes should therefore favor self-interested behavior in the offspring, drawing on the mother’s resources and preventing her from using resources on offspring that might have been sired by other fathers. The mother, on the other hand, has continuing investment in the child both before birth…and after birth…Maternal genes should therefore operate to conserve her resources, favoring sociability and educability—nice kids who go to school and do what they’re told.

Maternal genes are expressed most strongly in the cortex, representing theory of mind, language, and social competence, whereas paternal genes tend to be expressed more in the limbic system, which deals with resource-demanding basic drives, such as aggression, appetites, and emotion. Autism, then, can be regarded as the extreme expression of paternal genes, schizophrenia as the extreme expression of maternal genes.

Many of the characteristics linked to the autistic and psychotic spectra are physical, and can be readily understood in terms of the struggle for maternal resources. The autistic spectrum is associated with overgrowth of the placenta, larger brain size, higher levels of growth factors, and the psychotic spectrum with placental undergrowth, smaller brain size, and slow growth…

Imprinting may have played a major role in human evolution. One suggestion is that evolution of the human brain was driven by the progressive influence of maternal genes, leading to expansion of the neocortex and the emergence of recursive cognition, including language and theory of mind. The persisting influence of paternal genes, though, may have preserved the overall balance between people people and things people, while also permitting a degree of difference.

Simon Baron-Cohen has suggested that the dimension can also be understood along an axis of empathizers versus systemizers. People people tend to empathize with others, through adopting the intentional stance and the ability to take the perspective of others. Things people may excel at systemizing, through obsessive attention to detail and compulsive extraction of rules… pp. 141-142

I think this partly explains the popularity of libertarian economics among a certain set of the population, especially in Silicon Valley where people high on the autism spectrum tend to congregate. They tend to treat people as objects for their money-making schemes. They are unable to understand that people are not rational robots, and thus completely buy into the myth of Homo economicus. Their systemizing brains tend to see the Market as a perfect, frictionless, clockwork operating system (if only government “interference” would get out of the way, that is). It also explains why they feel nothing toward the victims of their “creative destruction.” It’s notable that most self-described libertarians tend to be males (who are often more interested in “things” and have a less developed theory of mind in general). In addition, research has shown that people who elect to study economics professionally have lower levels of empathy than the general population (and they then shape economic theory to conform to their beliefs). This should be somewhat concerning, since economics, unlike physics or chemistry or meteorology, concerns people.

This sort of calculating, self-centered hyper-rationality also lies behind the capitalist ethos.

The dark side of theory of mind is, of course, the ability to manipulate others. This has been referred to as Machiavellian intelligence, after Niccolo Machiavelli, the Italian diplomat who wrote about how rulers can manipulate the ruled to keep them in awe and obedience. It is certain that Machiavelli had a well-developed theory of mind, because he wrote stuff like this: “Now, in order to execute a political commission well, it is necessary to know the character of the prince and those who sway his counsels; … but it is above all things necessary to make himself esteemed, which he will do if he so regulates his actions and conversation that he shall be thought a man of honour, liberal, and sincere…It is undoubtedly necessary for the ambassador occasionally to mask his game; but it should be done so as not to awaken suspicion and he ought also to be prepared with an answer in case of discovery.” (Wikiquote) In fact, CEOs and middle-managers tend to be consummate social manipulators—it’s been shown using psychological tests that successful CEOs and politicians consistently score higher on traits of sociopathy than the general population.

There may be a dark side to social intelligence, though, since some unscrupulous individuals may take advantage of the cooperative efforts of others, without themselves contributing. These individuals are known as freeloaders. In order to counteract their behavior, we have evolved ways of detecting them. Evolutionary psychologists refer to a “cheater-detection module” in the brain that enables us to detect these imposters, but they in turn have developed more sophisticated techniques to escape detection.

This recursive sequence of cheater detection and cheater detection-detection has led to what has been called a “cognitive arms race,” perhaps first identified by the British evolutionary theorist Robert Trivers, and later amplified by other evolutionary psychologists. The ability to take advantage of others through such recursive thinking has been termed Machiavellian intelligence, whereby we use social strategies not merely to cooperate with our fellows, but also to outwit and deceive them…p. 136

It’s been argued (by me, for instance) that a hyperactive “cheater detection module,” often allied with lower levels of empathy, is what lies behind politically conservative beliefs. I would posit, too, that it also underlies many of the misogynistic attitudes among the so-called “Alt-Right”, since their theory of mind is too poorly developed to understand women’s thinking well enough to have positive interactions with them (instead preferring submission and obedience). A tendency toward poor ToM, in my opinion, explains a lot of seemingly unrelated characteristics of the Alt-right (economic libertarianism, misogyny, racism, technophilia, narcissism, atheism, hyper-rationality, ultra-hereditarianism, “political incorrectness” etc.)

Theory of mind appears to be more developed among women than men, probably because of their childrearing role. Many men can relate to the hyperactive tendency of their wives or girlfriends to “mind read” (“What are you thinking right now?”) and claim that they are correct in their inferences (“I know you’re thinking about your ex..!”).

Theory of Mind has long been seen as fundamental to the neuroscience of religious belief. The ability to attribute mental states to other people leads to attributing human-like attributes and consciousness to other creatures, and even things. If you’ve ever hit your computer for “misbehaving” or kicked your car for breaking down on you, then you know what I’m talking about. The tendency to anthropomorphize is also behind the misattribution of human traits and behaviors to non-human animals.

According to Robin Dunbar, it is through Theory of Mind that people may have come to know God, as it were. The notion of a God who is kind, who watches over us, who punishes, who admits us to Heaven if we are suitably virtuous, depends on the underlying understanding that other beings—in this case a supposedly supernatural one—can have human-like thoughts and emotions.

Indeed Dunbar argues that several orders of intentionality may be required, since religion is a social activity, dependent on shared beliefs. The recursive loops that are necessary run something like this: I suppose that you think that I believe there are gods who intend to influence our futures because they understand our desires. This is fifth-order intentionality. Dunbar himself must have achieved sixth-order intentionality if he supposes all of this, and if you suppose that he does then you have reached seventh-order…

If God depends on theory of mind, so too, perhaps, does the concept of the self. This returns us to the opening paragraph of this book, and Descartes’s famous syllogism “I think, therefore I am.” Since he was appealing to his own thought about thinking, this is second-order intentionality. Of course, we also understand the self to continue through time, which requires the (recursive) understanding that our consciousness also transcends the present. pp. 137-138 (emphasis mine)

Thus, higher-order gods tend to emerge at a certain point of socio-political complexity, where higher-order states of mind are achieved by a majority of people. A recent paper attempted to determine whether so-called “Moralizing High Gods” (MHG) and “Broad Supernatural Punishers” (BSP) were what allowed larger societies to form, or were rather the result of larger societies and the need to hold them together. The authors concluded the latter:

Do “Big Societies” Need “Big Gods”? (Cliodynamica)

Moralizing Gods as Effect, Not Cause (Marmalade)


Here’s evolutionary psychologist Robin Dunbar explaining why humans appear to be the only primates with the higher-order intentionality necessary to form Moralizing High Gods and Broad Supernatural Punishers:

We know from neuroimaging experiments that mentalizing competencies correlate with the volume of the mentalizing network in the brain, and especially with the volume of the orbitofrontal cortex, and this provides important support for the claim that, across primates, mentalizing competencies correlate with frontal lobe volume. Given this, we can…estimate the mentalizing competencies of fossil hominins, since they must, by definition, be strung out between the great apes and modern humans…As a group, the australopithecines cluster nicely around second-order intentionality, along with other great apes; early Homo populations all sit at third-order intentionality, while archaic humans and Neanderthals can just about manage fourth order; only fossil [Anatomically Modern Humans] (like their living descendants) achieve fifth order. Human Evolution: Our Brains and Behavior by Robin Dunbar, p. 242

… The sophistication of one’s religion ultimately depends on the level of intentionality one is capable of. While one can certainly have religion of some kind with third or fourth order intentionality, there seems to be a real phase shift in the quality of religion that can be maintained once one achieves fifth order intentionality. Given that archaic humans, including Neanderthals, don’t appear to have been more than fourth order intentional, it seems unlikely that they would have had religions of very great complexity. Quite what that means remains to be determined, but the limited archaeological evidence for an active religious life among archaics suggests that, at best, it wasn’t very sophisticated. Human Evolution: Our Brains and Behavior by Robin Dunbar, pp. 285-286

A hyperactive Theory of Mind has long been suspected as playing a role in religious belief, as well as in schizophrenia, in which intentionality has run amok, leading to paranoia and hallucinations (objects talking to you, etc.):

One of the most basic insights of the cognitive science of religion is that religions the world over and throughout human history have reliably evolved so as to involve representations that engage humans’ mental machinery for dealing with the social world. After all, such matters enthrall human minds. The gods and, even more fundamentally, the ancestors are social agents too! On the basis of knowing that the gods are social actors, religious participants know straightaway that they have beliefs, intentions, feelings, preferences, loyalties, motivations, and all of the other states of mind that we recognize in ourselves and others.

What this means is, first, that religious participants are instantly entitled to all of the inferences about social relations, which come as defaults with the development of theory of mind, and, second, that even the most naïve participants can reason about them effortlessly. Such knowledge need not be taught. We deploy the same folk psychology that we utilize in human commerce to understand, explain, and predict the gods’ states of mind and behaviors.

How Religions Captivate Human Minds (Psychology Today)

What Religion is Really All About (Psychology Today)

Most potently for our discussion of Julian Jaynes’s theories is the fact that fMRI scans have shown that auditory hallucinations—of the type that Jaynes described as the basis of ancient belief in gods—activate brain regions associated with Theory of Mind. Here’s psychologist Charles Fernyhough:

…When my colleagues and I scanned people’s brains while they were doing dialogic inner speech, we found activation in the left inferior frontal gyrus, a region typically implicated in inner speech. But we also found right hemisphere activation close to a region known as the temporoparietal junction (TPJ)…that’s an area that is associated with thinking about other people’s minds, and it wasn’t activated when people were thinking monologically…Two established networks are harnessed for the purpose of responding to the mind’s responses in an interaction that is neatly cost-effective in terms of processing resources. Instead of speaking endlessly without expectation of an answer, the brain’s work blooms into dialogue… The Voices Within by Charles Fernyhough; pp. 107-108 (emphasis mine)

Theory of mind is also involved with the brain’s Default mode network (DMN), a pattern of neural activity that takes place during mind-wandering, and seems to be largely responsible for the creation of the “unitary self.” It’s quite likely that the perception of the inner voice as belonging to another being with its own personality traits, as Jaynes described, activates our inbuilt ToM module, as do feelings of an “invisible presence” also reported by non-clinical voice hearers. This is from Michael Pollan’s book on psychedelic research:

The default mode network stands in a kind of seesaw relationship with the attentional networks that wake up whenever the outside world demands our attention; when one is active, the other goes quiet, and vice versa. But as any person can tell you, quite a lot happens in the mind when nothing much is going on outside us. (In fact, the DMN consumes a disproportionate share of the brain’s energy.) Working at a remove from our sensory processing of the outside world, the default mode is most active when we are engaged in higher-level “metacognitive” processes such as self-reflection, mental time travel, mental constructions (such as the self or ego), moral reasoning, and “theory of mind”—the ability to attribute mental states to others, as when we try to imagine “what it is like” to be someone else. All these functions belong exclusively to humans, and specifically to adult humans, for the default mode network isn’t operational until late in a child’s development. How To Change Your Mind by Michael Pollan; pp. 301-303

Theory of mind is also critical for signed and spoken language. After all, I need to have some idea what’s going on in your mind in order to get my point across. The more I can insert myself into your worldview, the more effectively I can tailor my language to communicate with you, dear reader. Hopefully, I’ve done a decent job (if you didn’t leave after the first paragraph, that is!). It also encourages language construction and development. In our earlier example, one would hope that the understanding of metaphor is sufficient that we implicitly understand that inosculation does not literally involve things kissing each other!

There is evidence to believe that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks. One might argue that this relationship is due solely to the fact that both language and theory of mind seem to begin to develop substantially around the same time in children (between ages 2–5). However, many other abilities develop during this same time period as well, and do not produce such high correlations with one another nor with theory of mind. There must be something else going on to explain the relationship between theory of mind and language.

Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey. Since a verbal utterance is often underdetermined, it can have different meanings depending on the actual context. Theory of mind abilities can play a crucial role in understanding the communicative and informative intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills….

Theory of Mind (Wikipedia)

Irony, metaphor, humor, and sarcasm are all examples of how language and theory of mind are related. Irony involves a knowing contrast between what is said and what is meant, meaning that you need to be able to infer what another person was thinking. “Irony depends on theory of mind, the secure knowledge that the listener understands one’s true intent. It is perhaps most commonly used among friends, who share common attitudes and threads of thought; indeed it has been estimated that irony is used in some 8 percent of conversational exchanges between friends.” (pp. 159-160) Sarcasm also relies on understanding the difference between what someone said and what they meant. I’m sure you’ve experienced an instance when someone writes some over-the-top comment on an online forum intended to sarcastically parody a spurious point of view, and some reader takes it at face value and loses their shit. It might be because we can’t hear the tone of voice or see the body language of the other person, but I suspect it also has something to do with the high percentage of high-spectrum individuals who frequent such message boards.

Metaphor, too, relies on a non-literal understanding of language. If the captain calls for “all hands on deck,” it is understood that he wants more than just our hands, and that we aren’t supposed to place our hands down on the deck. If it’s “raining cats and dogs,” most of us know that animals are not falling out of the sky. And if I advise you to “watch your head,” you know to look out for low obstructions and not have an out-of-body experience. Which reminds me, humor also relies on ToM.

Theory of mind allows for normal individuals to use language in a loose way that tends not to be understood by those with autism. Most of us, if asked the question “Would you mind telling me the time?” would probably answer with the time, but an autistic individual would be more inclined to give a literal answer, which might be something like “No, I don’t mind.” Or if you ask someone whether she can reach a certain book, you might expect her to reach for the book and hand it to you, but an autistic person might simply respond yes or no. This reminds me that I once made the mistake of asking a philosopher, “Is it raining or snowing outside?”–wanting to know whether I should grab an umbrella or a warm coat. He said, “Yes.” Theory of mind allows us to use our language flexibly and loosely precisely because we share unspoken thoughts, which serve to clarify or amplify the actual spoken message. pp. 160-161

If you do happen to be autistic, and all the stuff I just said goes over your head, don’t fret. I have enough theory of mind to sympathize with your plight. Although, if you are, you might more easily get this old programmer joke:

A programmer is at work when his wife calls and asks him to go to the store. She says she needs a gallon of milk, and if they have fresh eggs, buy a dozen. He comes home with 12 gallons of milk.

The relationship between creativity, mechanical aptitude, genius, and mental illness is complex and poorly understood, but has been a source of fascination for centuries. Oftentimes creative people were thought to be “possessed” by something outside of their own normal consciousness or abilities:

Recent evidence suggests that a particular polymorphism on a gene known to be related to the risk of psychosis is also related to creativity in people with high intellectual achievement.

The tendency to schizophrenia or bipolar disorder may underlie creativity in the arts, as exemplified by musicians such as Bela Bartok, Ludwig van Beethoven, Maurice Ravel, or Peter Warlock, artists such as Amedeo Clemente Modigliani, Maurice Utrillo, or Vincent van Gogh, and writers such as Jack Kerouac, D. H. Lawrence, Eugene O’Neill, or Marcel Proust. The esteemed mathematician John Forbes Nash, subject of the Hollywood movie A Beautiful Mind, is another example. The late David Horrobin went so far as to argue that people with schizophrenia were regarded as the visionaries who shaped human destiny itself, and it was only with the Industrial Revolution, and a change in diet, that schizophrenics were seen as mentally ill. p. 143

Horrobin’s speculations are indeed fascinating, and only briefly alluded to in the text above:

Horrobin…argues that the changes which propelled humanity to its current global ascendancy were the same as those which have left us vulnerable to mental disease.

‘We became human because of small genetic changes in the chemistry of the fat in our skulls,’ he says. ‘These changes injected into our ancestors both the seeds of the illness of schizophrenia and the extraordinary minds which made us human.’

Horrobin’s theory also provides support for observations that have linked the most intelligent, imaginative members of our species with mental disease, in particular schizophrenia – an association supported by studies in Iceland, Finland, New York and London. These show that ‘families with schizophrenic members seem to have a greater variety of skills and abilities, and a greater likelihood of producing high achievers,’ he states. As examples, Horrobin points out that Einstein had a son who was schizophrenic, as was James Joyce’s daughter and Carl Jung’s mother.

In addition, Horrobin points to a long list of geniuses whose personalities and temperaments have betrayed schizoid tendencies or signs of mental instability. These include Schumann, Strindberg, Poe, Kafka, Wittgenstein and Newton. Controversially, Horrobin also includes individuals such as Darwin and Faraday, generally thought to have displayed mental stability.

Nevertheless, psychologists agree that it is possible to make a link between mental illness and creativity. ‘Great minds are marked by their ability to make connections between unexpected events or trends,’ said Professor Til Wykes, of the Institute of Psychiatry, London. ‘By the same token, those suffering from mental illness often make unexpected or inappropriate connections between day-to-day events.’

According to Horrobin, schizophrenia and human genius began to manifest themselves as a result of evolutionary pressures that triggered genetic changes in our brain cells, allowing us to make unexpected links with different events, an ability that lifted our species to a new intellectual plane. Early manifestations of this creative change include the 30,000-year-old cave paintings found in France and Spain…

Schizophrenia ‘helped the ascent of man’ (The Guardian)

Writers May Be More Likely to Have Schizophrenia (PsychCentral)

The link between mental illness and diet is intriguing. For example, the popular ketogenic diet was originally developed not to lose weight, but to treat epilepsy! And, remarkably, a recent study has shown that a ketogenic diet caused remission of long-standing schizophrenia in certain patients. Recall that voice-hearing is a key symptom of schizophrenia (as well as some types of epilepsy). Was a change in diet partially responsible for what Jaynes referred to as bicameralism?

The medical version of the ketogenic diet is a high-fat, low-carbohydrate, moderate-protein diet proven to work for epilepsy. …While referred to as a “diet,” make no mistake: this is a powerful medical intervention. Studies show that over 50 percent of children with epilepsy who do not respond to medications experience significant reductions in the frequency and severity of their seizures, with some becoming completely seizure-free.

Using epilepsy treatments in psychiatry is nothing new. Anticonvulsant medications are often used to treat psychiatric disorders. Depakote, Lamictal, Tegretol, Neurontin, Topamax, and all of the benzodiazepines (medications like Valium and Ativan, commonly prescribed for anxiety) are all examples of anticonvulsant medications routinely prescribed in the treatment of psychiatric disorders. Therefore, it’s not unreasonable to think that a proven anticonvulsant dietary intervention might also help some people with psychiatric symptoms.

Interestingly, the effects of this diet on the brain have been studied for decades because neurologists have been trying to figure out how it works in epilepsy. This diet is known to produce ketones which are used as a fuel source in place of glucose. This may help to provide fuel to insulin resistant brain cells. This diet is also known to affect a number of neurotransmitters and ion channels in the brain, improve metabolism, and decrease inflammation. So there is existing science to support why this diet might help schizophrenia.

Chronic Schizophrenia Put Into Remission Without Medication (Psychology Today)

4. Kinship

The Sierpinski triangle provides a good model for human social organization

Although not discussed by Corballis, kinship structures are also inherently recursive. Given that kinship structures form the primordial organizational structure for humans, this is another important feature of human cognition that appears to derive from our recursive abilities. For a description of this, we’ll turn once again to Robin Dunbar’s book on Human Evolution. Dunbar (of Dunbar’s number fame) makes the case that the ability to supply names of kin members may be the very basis for spoken language itself!

There is one important aspect of language that some have argued constitutes the origin of language itself – the naming of kin.

There is no particular reason to assume that ability to name kin relationships was in any way ancestral, although it may well be the case that naming individuals appeared very early. On the other hand, labeling kinship categories (brother, sister, grandfather, aunt, cousin) is quite sophisticated: it requires us to make generalizations and create linguistic categories. And it probably requires us to be able to handle embeddedness, since kinship pedigrees are naturally embedded structures.

Kinship labels allow us to sum up in a single word the exact relationship between two individuals. The consensus among anthropologists is that there are only about six major types of kinship naming systems – usually referred to as Hawaiian, Eskimo, Sudanese, Crow, Omaha and Iroquois after the eponymous tribes that have these different kinship naming systems. They differ mainly in terms of whether they distinguish parallel from cross cousins and whether descent is reckoned unilaterally or bilaterally.

The reasons why these naming systems differ have yet to be explained satisfactorily. Nonetheless, given that one of their important functions is to specify who can marry whom, it is likely that they reflect local variations in mating and inheritance patterns. The Crow and Omaha kinship naming systems, for example, are mirror images of each other and seem to be a consequence of differing levels of paternity certainty (as a result, one society is patrilineal, the other matrilineal). Some of these may be accidents of cultural history, while others may be due to the exigencies of the local ecology. Kinship naming systems are especially important, for example, when there are monopolizable resources like land that can be passed on from one generation to the next and it becomes crucial to know just who is entitled, by descent, to inherit. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 272-273

Systems of kinship appear to be largely based around the means of subsistence and rules of inheritance. Herders, for example, tend to be patriarchal, and hence patrilineal. The same goes for agrarian societies where inheritance of arable land is important. Horticultural societies, by contrast, are often more matrilineal, reflecting women’s important role in food production. Hunter-gatherers, where passing down property is rare, are often bilateral. These are, of course, just rules of thumb. Sometimes tribes are divided into two groups, which anthropologists call moieties (from the French for “half”), which are designed to prevent inbreeding (brides are exchanged exclusively across moieties).

Anthropologists have sometimes claimed that biology cannot explain human kinship naming systems because many societies classify biologically unrelated individuals as kin. This is a specious argument for two separate reasons. One is that the claim is based on a naive understanding of what biological kinship is all about.

This is well illustrated by how we treat in-laws. In English, we classify in-laws (who are biologically unrelated to us) using the same kin terms that we use for real biological relatives (father-in-law, sister-in-law, etc.). However…we actually treat them, in emotional terms, as though they were real biological kin, and we do so for a very good biological reason: they share with us a common genetic interest in the next generation.

We tend to think of genetic relatedness as reflecting past history (i.e. how two people are related in a pedigree that plots descent from some common ancestor back in time). But in fact, biologically speaking, this isn’t really the issue, although it is a convenient approximation for deciding who is related to whom. In an exceptionally insightful but rarely appreciated book (mainly because it is very heavy on maths), Austen Hughes showed that the real issue in kinship is not relatedness back in time but relatedness to future offspring. In-laws have just as much stake in the offspring of a marriage as any other relative, and hence should be treated as though they are biological relatives. Hughes showed that this more sophisticated interpretation of biological relatedness readily explains a large number of ethnographic examples of kinship naming and co-residence that anthropologists have viewed as biologically inexplicable. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277

As a sort of proof of this, many of the algorithms that have been developed to determine genetic relatedness between individuals (whether they carry the same genes) are recursive! (A toy sketch follows below.) It’s also notable that the Pirahã, whose language allegedly does not use recursion, also lack extended kinship groups (and, for that matter, ancestor worship and higher-order gods). In fact, they are said to live entirely in the present, meaning no mental time travel either.

Piraha Indians, Recursion, Phonemic Inventory Size and the Evolutionary Significance of Simplicity (Anthropogenesis)
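To make the relatedness claim above concrete, here is a minimal sketch of the classic recursive kinship coefficient: the probability that alleles drawn at random from two individuals are identical by descent. The pedigree and names are invented for illustration; the recursion bottoms out at founders and otherwise climbs through parents:

```python
from functools import lru_cache

# Invented example pedigree: child -> (father, mother); founders -> (None, None)
PEDIGREE = {
    "gf": (None, None), "gm": (None, None),     # shared grandparents
    "dad": (None, None), "aunt": (None, None),  # married in from outside
    "mom": ("gf", "gm"), "uncle": ("gf", "gm"),
    "you": ("dad", "mom"), "cousin": ("uncle", "aunt"),
}

def depth(i):
    """Generations between an individual and their founder ancestors."""
    father, mother = PEDIGREE[i]
    return 0 if father is None else 1 + max(depth(father), depth(mother))

@lru_cache(maxsize=None)
def kinship(a, b):
    """Recursive kinship coefficient; founders assumed unrelated, non-inbred."""
    if a == b:
        father, mother = PEDIGREE[a]
        return 0.5 if father is None else 0.5 * (1 + kinship(father, mother))
    if depth(a) < depth(b):            # recurse through the more recent individual
        a, b = b, a
    father, mother = PEDIGREE[a]
    if father is None:                 # a founder shares no ancestry with b
        return 0.0
    return 0.5 * (kinship(father, b) + kinship(mother, b))

# Relatedness r is twice the kinship coefficient for non-inbred pairs:
print(2 * kinship("you", "cousin"))   # first cousins -> 0.125
```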

The second point is that in traditional small-scale societies everyone in the community is kin, whether by descent or by marriage; those few who aren’t soon become so by marrying someone or by being given some appropriate status as fictive or adoptive kin. The fact that some people are misclassified as kin or a few strangers are granted fictional kinship status is not evidence that kinship naming systems do not follow biological principles: a handful of exceptions won’t negate the underlying evolutionary processes associated with biological kinship, not least because everything in biology is statistical rather than absolute. One would need to show that a significant proportion of naming categories cross meaningful biological boundaries, but in fact they never do. Adopted children can come to see their adoptive parents as their real parents, but adoption itself is quite rare; moreover, when it does occur in traditional societies it typically involves adoption by relatives (as anthropological studies have demonstrated). A real sense of bonding usually happens only when the child is very young (and even then the effect is much stronger for the child than for the parents – who, after all, know the child is not theirs).

Given that kinship naming systems seem to broadly follow biological categories of relatedness, a natural assumption is that they arise from biological kin selection theory… It seems we have a gut response to help relatives preferentially, presumably as a consequence of kin selection…Some of the more distant categories of kin (second and third cousins, and cousins once removed, as well as great-grandparents and great-great-grandparents) attract almost as strong a response from us as close kin. Yet these distant relationships are purely linguistic categories that someone has labelled for us (‘Jack is your second cousin – you share a great-grandmother’). The moment you are told that somebody is related to you, albeit distantly, it seems to place them in a very different category from mere friends, even if you have never met them before…You only need to know one thing about kin – that they are related to us (and maybe exactly how closely they are related) – whereas with a friend we have to track back through all the past interactions to decide how they actually behaved on different occasions. Because less processing has to be done, decisions about kin should be done faster and at less cognitive cost than decisions about unrelated individuals. This would imply that, psychologically, kinship is an implicit process (i.e. it is automated), whereas friendship is an explicit process (we have to think about it)…

It may be no coincidence that 150 individuals is almost exactly the number of living descendants (i.e. members of the three currently living generations: grandparents, parents and children) of a single ancestral pair two generations back (i.e. the great-great-grandparents) in a society with exogamy (mates of one sex come from outside the community, while the other sex remains for life in the community into which it was born). This is about as far back as anyone in the community can have personal knowledge about who is whose offspring so as to be able to vouch for how everyone is related to each other. It is striking that no kinship naming system identifies kin beyond this extended pedigree with its natural boundary at the community of 150 individuals. It seems as though our kinship naming systems may be explicitly designed to keep track of and maintain knowledge about the members of natural human communities. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277

Corballis concludes:

Recursion, then, is not the exclusive preserve of social interaction. Our mechanical world is as recursively complex as is the social world. There are wheels within wheels, engines within engines, computers within computers. Cities are containers built of containers within containers, going right down, I suppose, to handbags and pockets within our clothing. Recursive routines are a commonplace in computer programming, and it is mathematics that gives us the clearest idea of what recursion is all about. But recursion may well have stemmed from runaway theory of mind, and been later released into the mechanical world. p. 144
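That last image cashes out directly in code. A trivial sketch (mine, not the book’s) of “containers within containers” as a recursive routine:

```python
# "Containers within containers": the depth of a nested structure is
# naturally defined in terms of itself -- the signature of recursion.

def nesting_depth(thing):
    if not isinstance(thing, list):
        return 0                                  # an atom: nothing inside
    return 1 + max((nesting_depth(x) for x in thing), default=0)

city = ["city", ["building", ["room", ["handbag", ["pocket", "coins"]]]]]
print(nesting_depth(city))  # -> 5
```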

In the final section of The Recursive Mind, Corballis takes a quick tour through human evolution to see when these abilities may have first emerged. That’s what we’ll take a look at in our last installment of this series.

The Recursive Mind (Review) – 3

Part 1

Part 2

2. Mental Time Travel

The word “remembering” is used loosely and imprecisely. There are actually several distinct types of memory; for example, episodic memory and semantic memory.

Episodic memory: The memory of actual events located in time and space, i.e., “reminiscing.”

Semantic memory: The storehouse of knowledge that we possess, but which does not involve any kind of conscious recollection.

Semantic memory refers to general world knowledge that we have accumulated throughout our lives. This general knowledge (facts, ideas, meaning and concepts) is intertwined in experience and dependent on culture.

Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives, from which we can recreate at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat.

We can learn about new concepts by applying our knowledge learned from things in the past. The counterpart to declarative or explicit memory is nondeclarative memory or implicit memory.

Semantic memory (Wikipedia)

Episodic memory is essential for the creation of the narrative self. Episodic memory takes various forms, for example:

Specific events: When you first set foot in the ocean.

General events: What it feels like stepping into the ocean in general. This is a memory of what a personal event is generally like. It might be based on the memories of having stepped in the ocean many times over the years.

Flashbulb memories: Flashbulb memories are critical autobiographical memories about a major event.

Episodic Memory (Wikipedia)

For example, if you are taking a test for school, you are probably not reminiscing about the study session you had the previous evening, or where you need to be the next class period. You are probably not thinking about your childhood, or about the fabulous career prospects that are sure to result from passing this test. Those episodic memories—inserting yourself into past or future scenarios—would probably be a hindrance to the test you are presently trying to complete. Semantic memory would be what you are drawing upon to answer the questions (hopefully correctly).


It is often difficult to distinguish between one and the other. Autobiographical memories are often combinations of the two—lived experience combined with autobiographical stories and family folklore. Sometimes, we can even convince ourselves that things that didn’t happen actually did (false memories). Our autobiographical sense of self is determined by this process.

Endel Tulving has described remembering as autonoetic, or self-knowing, in that one has projected one’s self into the past to re-experience some earlier episode. Simply knowing something, like the boiling point of water, is noetic, and implies no shift of consciousness. Autonoetic awareness, then, is recursive, in that one can insert previous personal experience into present awareness. This is analogous to the embedding of phrases within phrases, or sentences within sentences.

Deeper levels of embedding are also possible, as when I remember that yesterday I had remembered an event that occurred at some earlier time. Chunks of episodic awareness can thus be inserted into each other in recursive fashion. Having coffee at a conference recently, I was reminded of an earlier conference where I managed to spill coffee on a distinguished philosopher. This is memory of a memory of an event. I shall suggest later that this kind of embedding may have set the stage for the recursive structure of language itself (p. 85) [Coincidentally, as I was typing this paragraph, I spilled coffee on the book. Perhaps you will spill coffee on your keyboard while reading this. – CH]

Corballis mentions the case of English musician Clive Wearing, whose hippocampus was damaged, leading to anterograde and retrograde amnesia. At the other end of the spectrum is the Russian mnemonist Solomon Shereshevsky.

The Abyss (Oliver Sacks, The New Yorker)

Long-term memory can further be subdivided into implicit memory and explicit (or declarative) memory.

“Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition.” (p. 98) … Implicit memory…enables us to learn without any awareness that we are doing so. It is presumably more primitive in an evolutionary sense than is explicit memory, which is made up of semantic and episodic memory. Explicit memory is sometimes called declarative memory because it is the kind of memory we can talk about or declare.

Implicit memory does not depend on the hippocampus, so amnesia resulting from hippocampal damage does not entirely prevent adaptation to new environments or conditions, but such adaptation does not enter consciousness. p. 88 (emphasis mine)

Explicit memory, by contrast, “provide[s] yet more adaptive flexibility, because it does not depend on immediate evocation from the environment” p. 98 (emphasis mine)

The textbook case of implicit memory is riding a bicycle. You don’t think about it, or ponder how to do it; you just do it. No amount of intellectual thought and pondering and thinking through your options will help you to swim or ride a bike or play the piano. When a line drive is hit to the shortstop, implicit memory, not explicit memory, catches the ball (although the catch might provide a nice explicit memory for the shortstop later on). A daydreaming shortstop would miss the ball completely.

Words are stored in semantic memory, and only rarely or transiently in episodic memory. I have very little memory of the occasions on which I learned the meanings of the some 50,000 words that I know–although I can remember occasionally looking up obscure words that I didn’t know, or that had escaped my semantic memory. The grammatical rules by which we string words together may be regarded as implicit rather than explicit memory, as automatic, perhaps, as riding a bicycle. Indeed, so automatic are the rules of grammar that linguists have still not been able to elaborate all of them explicitly. p. 126 (emphasis mine)

Operant conditioning (also called signal learning, solution learning, or instrumental learning) is another type of learning that does not require conscious, deliberative thought. It is a simple stimulus and response. You touch the stove, and you know the stove is hot. There was no thinking involved when Pavlov’s dogs salivated at the sound of a bell, for example. In a very unethical experiment, the behaviorist John B. Watson took a nine-month-old orphan and conditioned him to be afraid of rats, rabbits, monkeys, dogs and masks. He did this by making a loud, sharp noise (banging a metal bar with a hammer), which the child was afraid of, whenever the child was presented with those things. By associating the sound with the stimulus, he was able to induce a fear of those items. But there was no volition; no conscious thought was involved in this process. It works the same way on dogs, rabbits, humans or fruit flies. Behaviorism tells us next to nothing about human consciousness, or what makes us different.

These types of conditioning may be said to fall under the category of implicit memory. As we have seen, implicit memory may also include the learning of skills and even mental strategies to cope with environmental challenges. Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition. Of course, one may remember the experience of learning to ride a bicycle, but that is distinct from the learning itself…These are episodic memories, independent of the process of actually learning (more or less) to ride the bike. p. 98 (emphasis mine, italics in original)

This important distinction is what is behind Jaynes’s declaration that learning and remembering do not require consciousness. Implicit memory and operant conditioning do not require the kind of deliberative self-consciousness or “analog I” that Jaynes described. Even explicit memory—the ability to recall facts and details, for example—does not, strictly speaking, require deliberative self-consciousness. Clive Wearing, referred to above, could still remember how to play the piano, despite living in an “eternal present.” Thus, it is entirely possible that things such as ruminative self-consciousness emerged quite late in human history. Jaynes himself described why consciousness (as distinct from simply being functional and awake) is not required for learning, and can even be detrimental to it.

In more everyday situations, the same simple associative learning can be shown to go on without any consciousness that it has occurred. If a distinct kind of music is played while you are eating a particularly delicious lunch, the next time you hear the music you will like its sounds slightly more and even have a little more saliva in your mouth. The music has become a signal for pleasure which mixes with your judgement. And the same is true for paintings. Subjects who have gone through this kind of test in the laboratory, when asked why they liked the music or paintings better after lunch, could not say. They were not conscious they had learned anything. But the really interesting thing here is that if you know about the phenomenon beforehand and are conscious of the contingency between food and the music or painting, the learning does not occur. Again, consciousness reduces our learning abilities of this type, let alone not being necessary for them…

The learning of complex skills is no different in this respect. Typewriting has been extensively studied, it generally being agreed in the words of one experimenter “that all adaptations and short cuts in methods were unconsciously made, that is, fallen into by the learners quite unintentionally.” The learners suddenly noticed that they were doing certain parts of the work in a new and better way.

Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying the words. The important thing here is that the subject is not aware that he is learning anything at all. He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it. OoCitBotBM; pp. 33-35

But we not only use our memory to recall past experiences; we also think about future events, and this is based on the same ability to mentally time travel. It may seem paradoxical to think of memory as having anything to do with events that haven’t happened yet, but brain scans show that similar areas of the brain are activated when recalling past events and envisioning future ones—particularly the prefrontal cortex, but also parts of the medial temporal lobe. There is slightly more activity in imagining future events, probably due to the increased creativity required of this activity.


In this ability to mentally time travel we seem to be unique among animals, at least in the extent to which we do it and the flexibility with which we do so:

So far, there is little convincing evidence that animals other than humans are capable of mental time travel—or if they are, their mental excursions into past or future have little of the extraordinary flexibility and broad provenance that we see in our own imaginative journeys. The limited evidence from nonhuman animals typically comes from behaviors that are fundamentally instinctive, such as food caching or mating, whereas in humans mental time travel seems to cover all aspects of our complex lives. p. 112

Animals Are ‘Stuck In Time’ With Little Idea Of Past Or Future, Study Suggests (Science Daily)

However, see: Mental time-travel in birds (Science Daily)

We are always imagining and anticipating, whether events later the same day or years from now. Even in a conversation, we are often planning what we are about to say, rather than focusing on the conversation itself. That is, we are often completely absent in the present moment, which is something that techniques like mindfulness meditation are designed to mitigate. We can even imagine events after we are dead, and it has been argued that this knowledge lies behind many unique human behaviors such as the notion of an afterlife and the idea of religion more generally. The way psychologists study this is to use implicit memory (as described above) to remind people of their own mortality. This is done through a technique called priming:

Priming is remarkably resilient. In one study, for example, fragments of pictures were used to prime recognition of whole pictures of objects. When the same fragments were shown 17 years later to people who had taken part in the original experiment, they were able to write the name of the object associated with each fragment much more accurately than a control group who had not previously seen the fragments. p. 88

When primed with notions of death and their own mortality, it has been shown that people in general are more authoritarian, more aggressive, more hostile to out-groups and simultaneously more loyal to in-groups. Here’s psychologist Sheldon Solomon describing the effect in a TED Talk:

“Studies show that when people are reminded of their mortality, for example, by completing a death anxiety questionnaire, or being interviewed in front of a funeral parlor, or even exposed to the word ‘death’ that’s flashed on a computer screen so fast—28 milliseconds—that you don’t know if you’ve even seen anything—When people are reminded of their own death, Christians, for example, become more derogatory towards Jews, and Jews become more hostile towards Muslims. Germans sit further away from Turkish people. Americans who are reminded of death become more physically aggressive to other Americans who don’t share their political beliefs. Iranians reminded of death are more supportive of suicide bombing, and they’re more willing to consider becoming martyrs themselves. Americans reminded of their mortality become more enthusiastic about preemptive nuclear, chemical and biological attacks against countries who pose no direct threat to us. So man’s inhumanity to man—our hostility and disdain toward people who are different—results then, I would argue, at least in part from our inability to tolerate others who do not share the beliefs that we rely on to shield ourselves from mortal terror.”

Humanity at the Crossroads (YouTube)

One important aspect of episodic memory is that it locates events in time. Although we are often not clear precisely when remembered events happened, we usually have at least a rough idea, and this is sufficient to give rise to a general understanding of time itself. Locating events in time appears to be related to locating them in space.

Episodic memory allows us to travel back in time, and consciously relive previous experiences. Thomas Suddendorf called this mental time travel, and made the important suggestion that mental time travel allows us to imagine future events as well as remember past ones. It also adds to the recursive possibilities; I might remember, for example, that yesterday I had plans to go to the beach tomorrow. The true significance of episodic memory, then, is that it provides a vocabulary from which to construct future events, and so fine-tune our lives.

What has been termed episodic future thinking, or the ability to imagine future events, emerges in children at around the same time as episodic memory itself, between the ages of three and four. Patients with amnesia are as unable to answer questions about past events as they are to say what might happen in the future… p. 100

Once again, the usefulness of this will be determined by the social environment. I will argue later that this ability to mentally time travel, as with the ability to “read minds” (which we’ll talk about next) became more and more adaptive over time as societies became more complex. For example, it would play little to no role among immediate return hunter gatherers (such as the Pirahã), who live mostly in the present and do not have large surpluses. Among delayed return hunter gatherers and horticulturalists, however, it would play a far larger role.

When we get to complex foragers and beyond, however, the ability to plan for the future becomes almost like a super-power. And here, we see a connection I will make between recursion and the Feasting Theory we’ve previously discussed. Simply put, an enhanced sense of future states allows one to more effectively ensnare people in webs of debt and obligation, which can then be leveraged to gain wealth and social advantage. I will argue that this is what allowed the primordial inequalities to form in various societies which could produce surpluses of wealth. It also demonstrates the evolutionary advantages of recursive thinking.

Corballis then ties together language and mental time travel. He posits that the recursive nature of language evolved specifically to allow us to share past and future experiences. It allows us to narratize our lives, and to tell that story to others, and perhaps more importantly, to ourselves.

Language allows us to construct things that don’t exist—shared fictions. It allows us to tell fictional stories of both the past and the future.

Episodic memories, along with combinatorial rules, allow us not only to create and communicate possible episodes in the future, but also to create fictional episodes. As a species, we are unique in telling stories. Indeed the dividing line between memory and fiction is blurred; every fictional story contains elements of memory, and memories contain elements of fiction…Stories are adaptive because they allow us to go beyond personal experience to what might have been, or to what might be in the future. They provide a way of stretching and sharing experiences so that we are better adapted to possible futures. Moreover, stories tend to become institutionalized, ensuring that shared information extends through large sections of the community, creating conformity and social cohesion. p. 124

The main argument … is that grammatical language evolved to enable us to communicate about events that do not take place in the here and now. We talk about episodes in the past, imagined or planned episodes in the future, or indeed purely imaginary episodes in the form of stories. Stories may extend beyond individual episodes, and involve multiple episodes that may switch back and forth in time. The unique properties of grammar, then, may have originated in the uniqueness of human mental time travel…Thus, although language may have evolved, initially at least, for the communication of episodic information, it is itself a robust system embedded in the more secure vaults of semantic and implicit memory. It has taken over large areas of our memory systems, and indeed our brains. p. 126


The mental faculties that allow us to locate, sort, and retrieve events in time are apparently the same ones that we use to locate things in space. Languages have verb tenses that describe when things took place (although a few languages lack tense). The ability to range at will over past, present, and future gave rise to stories, which are often the glue that holds societies together, such as origin stories or tales of distant ancestors. Is the image above truly about moving forward in space, or is it about something else? What does it mean to say things like we “move forward” after a tragedy?

Different sets of grid cells form different grids: grids with larger or smaller hexagons, grids oriented in other directions, grids offset from one another. Together, the grid cells map every spatial position in an environment, and any particular location is represented by a unique combination of grid cells’ firing patterns. The single point where various grids overlap tells the brain where the body must be…Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.

The Brain Maps Out Ideas and Memories Like Spaces (Quanta)

It is likely that a dog, or even a bonobo, does not tell itself an ongoing “story” of its life. It simply “is.” If we accept narratization as an important feature of introspective self-consciousness, then we must accept that the ability to tell ourselves these internal stories is key to the creation of that concept. But when did we acquire this ability? And is it universal? Clearly, it has something to do with the acquisition of language. And if we accept a late origin of language, it certainly cannot have arisen more than 70,000-50,000 years before present. To conclude, here is an excerpt from a paper Corballis wrote for the Royal Society:

the evolution of language itself is intimately connected with the evolution of mental time travel. Language is exquisitely designed to express ‘who did what to whom, what is true of what, where, when and why’…and these are precisely the qualities needed to recount episodic memories. The same applies to the expression of future events—who will do what to whom, or what will happen to what, where, when and why, and what are we going to do about it…To a large extent, then, the stuff of mental time travel is also the stuff of language.

Language allows personal episodes and plans to be shared, enhancing the ability to plan and construct viable futures. To do so, though, requires ways of representing the elements of episodes: people; objects; actions; qualities; times of occurrence; and so forth…The recounting of mental time travel places a considerable and, perhaps, uniquely human burden on communication, since there must be ways of referring to different points in time—past, present and future—and to locations other than that of the present. Different cultures have solved these problems in different ways. Many languages use tense as a way of modifying verbs to indicate the time of an episode, and to make other temporal distinctions, such as that between continuous action and completed action. Some languages, such as Chinese, have no tenses, but indicate time through other means, such as adverbs or aspect markers. The language spoken by the Pirahã, a tribe of some 200 people in Brazil, has only a very primitive way of talking about relative time, in the form of two tense-like morphemes, which seem to indicate simply whether an event is in the present or not, and Pirahã are said to live largely in the present.

Reference to space may have a basis in hippocampal function; as noted earlier, current theories suggest that the hippocampus provides the mechanism for the retrieval of memories based on spatial cues. It has also been suggested that, in humans, the hippocampus may encompass temporal coding, perhaps through analogy with space; thus, most prepositions referring to time are borrowed from those referring to space. In English, for example, words such as at, about, around, between, among, along, across, opposite, against, from, to and through are fundamentally spatial, but are also employed to refer to time, although a few, such as since or until, apply only to the time dimension. It has been suggested that the hippocampus may have undergone modification in human evolution, such that the right hippocampus is responsible for the retrieval of spatial information, and the left for temporal (episodic or autobiographical) information. It remains unclear whether the left hippocampal specialization is a consequence of left hemispheric specialization for language, or of the incorporation of time into human consciousness of past and future, but either way it reinforces the link between language and mental time travel.

The most striking parallel between language and mental time travel has to do with generativity. We generate episodes from basic vocabularies of events, just as we generate sentences to describe them. It is the properties of generativity and recursiveness that, perhaps, most clearly single out language as a uniquely human capacity. The rules governing the generation of sentences about episodes must depend partly on the way in which the episodes themselves are constructed, but added rules are required by the constraints of the communication medium itself. Speech, for example, requires that the account of an event that is structured in space–time be linearized, or reduced to a temporal sequence of events. Sign languages allow more freedom to incorporate spatial as well as temporal structure, but still require conventions. For example, in American sign language, the time at which an event occurred is indicated spatially, with the continuum of past to future running from behind the body to the front of the body.

Of course, language is not wholly dependent on mental time travel. We can talk freely about semantic knowledge without reference to events in time… However, it is mental time travel that forced communication to incorporate the time dimension, and to deal with reference to elements of the world, and combinations of those elements, that are not immediately available to the senses. It is these factors, we suggest, that were in large part responsible for the development of grammars. Given the variety of ways in which grammars are constructed, such as the different ways in which time is marked in different languages, we suspect that grammar is not so much a product of some innately determined universal grammar as it is a product of culture and human ingenuity, constrained by brain structure.

Mental time travel and the shaping of the human mind (The Royal Society)

Next time, we’ll take a look at another unique recursive ability of the human mind: the ability to infer the thoughts and emotions of other people, a.k.a. the Theory of Mind.

The Recursive Mind (Review) – 2

Part 1

1. Language

We’ve already covered language a bit. A good example of recursion in language is given by children’s rhymes, such as This is the House That Jack Built:

It is a cumulative tale that does not tell the story of Jack’s house, or even of Jack who built the house, but instead shows how the house is indirectly linked to other things and people, and through this method tells the story of “The man all tattered and torn”, and the “Maiden all forlorn”, as well as other smaller events, showing how these are interlinked…(Wikipedia)

“The House That Jack Built” plays on the process of embedding in English noun phrases. The nursery rhyme is one sentence that continuously grows by embedding more and more relative clauses as postmodifiers in the noun phrase that ends the sentence…In theory, we could go on forever because language relies so heavily on embedding.

The Noun Phrase (Papyr.com)

In English, clauses can be embedded either in the center, or at the end:

In “The House That Jack Built” clauses are added to the right. This is called right-embedding. Much more psychologically taxing is so-called center-embedding, where clauses are inserted in the middle of clauses. We can cope with a single embedded clause, as in:

“The malt that the rat ate lay in the house that Jack built.”

But it becomes progressively more difficult as we add further embedded clauses:

“The malt [that the rat (that the cat killed) ate] lay in the house that Jack built.”

Or worse:

“The malt [that the rat (that the cat {that the dog chased} killed) ate] lay in the house that Jack built.”

I added brackets in the last two examples that may help you see the embeddings, but even so they’re increasingly difficult to unpack. Center-embedding is difficult because words to be linked are separated by the embedded clauses; in the last example above, it was the malt that lay in the house, but the words malt and lay are separated by twelve words. In holding the word malt in mind in order to hear what happened to it, one must also deal with separations between rat and ate and between cat and killed…Center embeddings are more common in written language than in spoken language, perhaps because when language is written you can keep it in front of you indefinitely while you try to figure out the meaning….The linguistic rules that underlie our language faculty can create utterances that are potentially, if not actually, unbounded in potential length and variety. These rules are as pure and beautiful as mathematics…

The Truth About Language pp. 13-14
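As a toy illustration of the difference (my own Python sketch, not anything from the book), here is how the same chain of noun-verb links can be strung out to the right or stacked in the center; the center-embedded version defers every verb to the end, which is exactly what strains memory:

```python
# Each pair links a noun to the verb that must eventually rejoin it.
chain = [("the dog", "chased"), ("the cat", "killed"),
         ("the rat", "ate"), ("the malt", "lay in the house that Jack built")]

def right_embedded(chain):
    """Right-embedding: each clause closes before the next one opens,
    so the listener never holds more than one clause in mind."""
    out = chain[0][0]
    for i, (noun, verb) in enumerate(chain):
        out += f" that {verb}"
        if i + 1 < len(chain):
            out += f" {chain[i + 1][0]}"
    return out

def center_embedded(chain):
    """Center-embedding: all the nouns stack up front and all the verbs
    are deferred to the end, so every noun must be held in memory until
    its faraway verb finally arrives."""
    nouns = " that ".join(noun for noun, _ in reversed(chain))
    verbs = " ".join(verb for _, verb in chain)
    return f"{nouns} {verbs}"

print(right_embedded(chain))
# the dog that chased the cat that killed the rat that ate the malt
# that lay in the house that Jack built
print(center_embedded(chain))
# the malt that the rat that the cat that the dog chased killed ate
# lay in the house that Jack built
```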

Or a song you may have sung when you were a child: “There Was an Old Lady who Swallowed a Fly.”

The song tells the nonsensical story of an old woman who swallows increasingly large animals, each to catch the previously swallowed animal, but dies after swallowing a horse. The humour of the song stems from the absurdity that the woman is able to inexplicably and impossibly swallow animals of preposterous sizes without dying, suggesting that she is both superhuman and immortal; however, the addition of a horse is finally enough to kill her. Her inability to survive after swallowing the horse is an event that abruptly and unexpectedly applies real-world logic to the song, directly contradicting her formerly established logic-defying animal-swallowing capability. (Wikipedia)

The structure can be expressed this way:

cow [goat (dog {cat [bird (spider {fly})]})] – after which, she swallows the horse and expires. The resulting autopsy would no doubt unfold a chain of events resembling a Matryoshka doll (or a Turducken).

Or yet another chestnut from my childhood: “There’s a Hole in My Bucket,” which is less an example of recursion than a kind of strange loop:

The song describes a deadlock situation: Henry has a leaky bucket, and Liza tells him to repair it. To fix the leaky bucket, he needs straw. To cut the straw, he needs an axe. To sharpen the axe, he needs to wet the sharpening stone. To wet the stone, he needs water. But to fetch water, he needs the bucket, which has a hole in it. (Wikipedia)

Whether all human languages have a recursive structure by default, or are at least capable of it, is one of the most controversial topics in linguistics.

Bringing more data to language debate (MIT News)

The idea that language is not just based on external stimulus, but is in some way “hard-wired” into the human brain was first developed by Noam Chomsky. He argued that this meant that grammatical constructions were somehow based on the brain’s inner workings (i.e. how the brain formulates thoughts internally), and therefore all languages would exhibit similar underlying structures, something which he called the “Universal Grammar.”

Furthermore, he argued that language construction at its most fundamental level could be reduced to a single recursive operation he called Merge. This was part of his so-called “Minimalist Program” of language construction.

Merge is…when two syntactic objects are combined to form a new syntactic unit (a set).

Merge also has the property of recursion in that it may apply to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge.

This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is “an indispensable operation of a recursive system … which takes two syntactic objects A and B and forms the new object G={A,B}” (Wikipedia)
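As a rough illustration (my own sketch, not Chomsky’s formalism), Merge can be modeled as nothing more than a set-forming operation that can apply to its own output:

```python
def merge(a, b):
    """Merge, per the quoted definition: take two syntactic objects
    A and B and form the new object G = {A, B}. frozenset stands in
    for the unordered pair, since ordinary Python sets cannot nest."""
    return frozenset([a, b])

# Recursion: Merge applies to its own output. Lexical items merge into
# a set, and that set can itself be an input to a further Merge.
vp = merge("loves", "John")   # {'loves', 'John'}
s = merge("Jane", vp)         # {'Jane', {'loves', 'John'}}
print(s)                      # frozenset({'Jane', frozenset({'John', 'loves'})})
```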

Merge applies to I-language, the internal thinking behind language, which is translated into what he calls E-language (E for external) when spoken out loud. Corballis explains:

The Merge operation…strictly holds for what Chomsky calls I-language, which is the internal language of thought, and need not apply directly to E-language, which is external language as actually spoken or signed. In mapping I-language to E-language, various supplementary principles are needed. For instance…the merging of ‘Jane loves John’ with ‘Jane flies airplanes’ to get, ‘Jane, who flies airplanes, loves John’ requires extra rules to introduce the word who and delete one copy of the word Jane.

I-language will map onto different E-languages in different ways. Chomsky’s notion of unbounded Merge, recursively applied, is therefore essentially an idealization, inferred from the study of external languages, but is not in itself directly observable. pp. 23-24

It’s notable that whatever the other merits of Merge, it does appear to be a good description of how language is extended via metaphor. I recently ran across a good example of this: the word inosculate, meaning to homogenize, make continuous, or interjoin. Its root is the verb “to kiss,” which itself is derived from the word for “mouth.” This word, like so many others, was created through recursion and metaphor.

From in- +‎ osculate, from Latin ōsculātus (“kiss”), from ōs + -culus (“little mouth”).

The sheer diversity of human languages that have been found and studied has put Chomsky’s Universal Grammar theory on ice. There does not seem to be any sort of universal grammar that we can find, nor a universal method of thought which underlies it. A few languages have been discovered that do not appear to use recursion, most famously the Pirahã language of the Amazon, but also the Iatmul language of New Guinea and some Australian languages spoken in West Arnhem Land. For example, in Bininj Gun-Wok, the phrase “They stood watching us fight” would be rendered as “They stood/they were watching us/we were fighting.” (p. 27)

Recursion and Human Thought: A Talk with Daniel Everett (Edge Magazine)

It continues to be debated whether animals have a capacity for recursive expression. A study in 2006 argued that starling calls exhibited a recursive quality, but this has been questioned. As I mentioned earlier, it is often difficult to tell whether something which may appear recursive is actually generated recursively.

Starlings vs. Chomsky (Discover)

Corballis argues here (as he has in other books, which I will also refer to) that the human mental capacity for language evolved first via gesticulation (hand gestures), rather than verbal sounds (speech). Only much later, he argues, did communication switch from primarily hand gestures to speech. “I have argued…that the origins of language lie in manual gestures, and the most language-like behavior in nonhuman species is gestural.” (p. 161) Some reasons he gives for believing this are:

1.) We have had extensive control over our arms, hands, and fingers (as demonstrated by tool use and manufacture, for example) for millions of years, but the fine motor control over our lungs and vocal tract required to produce articulate speech is of far more recent vintage. It is also unique to our species—other apes don’t have the control over the lungs or mouth required for speech. In fact, the unique control that humans possess over their breathing leads Corballis to speculate that distant human ancestors must have spent considerable time diving in water, which requires extensive breath control. Human babies, for example, instinctively know to hold their breath in water in a way that other apes—including our closest relatives—cannot. This leads him to endorse an updated version of the Aquatic Ape theory called the Littoral hypothesis, or the Beachcomber theory.

In an extensive discussion of the aquatic ape hypothesis and the various controversies surrounding it, Marc Verhaegen suggests that our apelike ancestors led what he calls an aquarboreal life, on the borders between forest and swamp lands. There they fed on shellfish and other waterborne foods, as well as on plants and animals in the neighboring forested area…In this view, the environment that first shaped our evolution as humans was not so much the savanna as the beach. During the Ice ages the sea levels dropped, opening up territory rich in shellfish but largely devoid of trees. Our early Pleistocene forebears dispersed along the coasts, and fossils have been discovered not only in Africa but as far away as Indonesia, Georgia, and even England. Stone tools were first developed not so much for cutting carcasses of game killed on land as for opening and manipulating shells. Bipedalism too was an adaptation not so much for walking and running as for swimming and shallow diving.

Verhaegen lists a number of features that seem to have emerged only in Pleistocene fossils, some of which are present in other diving species but not in pre-Pleistocene hominins. These include loss of fur; an external nose; a large head; and head, body, and legs all in a straight line. The upright stance may have helped individuals stand tall and spot shellfish in the shallow water. Later, in the Pleistocene, different Homo populations ventured inland along rivers and perhaps then evolved characteristics more suited to hunting land-based animals. The ability to run, for instance, seems to have evolved later in the Pleistocene. But Verhaegen suggests that, in fact, we are poorly adapted to a dry, savannalike environment and retain many littoral adaptations (that is, adaptations to coastal regions): “We have a water- and sodium-wasting cooling system of abundant sweat glands, totally unfit for a dry environment. Our maximal urine concentration is much too low for a savanna-dwelling mammal. We need much more water than other primates and have to drink more often than savanna inhabitants, yet we cannot drink large quantities at a time.”

Part of the reason for our swollen brains may derive from a diet of shellfish and other fish accessible the shallow-water foraging [sic]. Seafood supplies docosahexaenoic acid (DHA), an omega-3 fatty acid, and some have suggested that it was this that drove the increase in brain size, reinforcing the emergence of language and social intelligence.

Michael A. Crawford and colleagues have long proposed that we still need to supplement our diets with DHA and other seafoods to maintain fitness. Echoing Crawford, Marcos Duarte issues a grim warning: “The sharp rise in brain disorders, which, in many developed countries, involves social costs exceeding those of heart disease and cancer combined, has been deemed the most worrying change in disease pattern in modern societies, calling for urgent consideration of seafood requirements to supply the omega 3 and DHA required for brain health.”
The Truth About Language: What It Is and Where It Came From; pp. 95-97

2.) Chimpanzees appear to have little control over the types of sounds that they make. Vocalization in primates appears to be largely instinctual, and not under conscious control.

3.) Although apes such as chimpanzees, bonobos, and gorillas cannot learn spoken language, they can be taught to communicate with humans using signs and symbols. Individual apes have learned vocabularies numbering in the hundreds of words, most notably Koko the gorilla (through sign language) and the bonobo Kanzi (through a lexigram keyboard).

Manual activity in primates is intentional and subject to learning, whereas vocalizations appear to be largely involuntary and fixed. In teaching great apes to speak, much greater success has been achieved through gesture and the use of keyboards than through vocalization, and the bodily gestures of apes in the wild are less constrained by context than are their vocalizations. These observations strongly suggest that language evolved from manual gestures. p. 57



4.) Mirror neurons are neurons in our brain that fire both when we perform an action and when we watch someone else perform that same action. They were first discovered in monkeys (sometimes called “monkey-see, monkey-do” neurons), but are present in all apes. These are part of a larger network of regions called the mirror system. It has been proposed that language grew out of this mirror system. The underlying idea is that, “[W]e perceive speech not in terms of the acoustic patterns it creates, but in terms of how we ourselves would articulate it.” (p. 61) This is called the motor theory of speech perception. If this theory is true, it would point to an origin of language in gestural imitation rather than calls, which do not recruit mirror neurons in other primates.

The mirror system, in contrast to the primate vocalization system, has to do with intentional action, and is clearly modifiable through experience. For example, mirror neurons in the monkey brain respond to the sounds of certain actions, such as the tearing of paper or the cracking of nuts, and these responses can only have been learned. The neurons were not activated, though, by monkey calls, suggesting that vocalization itself is not part of the mirror system in monkeys…

…in the monkey, mirror neurons responded to transitive acts, as in reaching for an actual object, but did not respond to intransitive acts, where a movement is mimed and involves no object. In humans, by contrast, the mirror system responds to both transitive and intransitive acts, and the incorporation of intransitive acts would have paved the way to the understanding of acts that are symbolic rather than object-related…functional magnetic resonance imaging (fMRI) in humans shows that the mirror-neuron region of the premotor cortex is activated not only when they watch movements of the foot, hand, and mouth, but also when they read phrases pertaining to these movements. Somewhere along the line, the mirror system became interested in language. p. 62

5.) The anatomical structures in the mouth and throat required to produce something like human vocal patterns (phonemes) also came fairly late in human evolution. There is no evidence that even archaic humans could do it properly:

One requirement for articulate speech was the lowering of the larynx, creating a right-angled vocal tract that allows us to produce the wide range of vowels that characterize speech. Philip Lieberman has argued that this modification was incomplete even in the Neanderthals…Daniel Lieberman…had shown that the structure of the cranium underwent changes after we split with the Neanderthals. One such change is the shortening of the sphenoid, the central bone of the cranial base from which the face grows forward, resulting in a flattened face. The flattening may have been part of the change that created the right-angled vocal tract, with horizontal and vertical components of equal length. This is the modification that allowed us the full range of vowel sounds, from ah to oo.

Other anatomical evidence suggests that the anatomical requirements for fully articulate speech were probably not complete until late in the evolution of Homo. For example, the hypoglossal nerve, which innervates the tongue, is also much larger in humans, perhaps reflecting the importance of tongue gestures in speech. The evidence suggests that the size of the hypoglossal canal in early australopithecines, and perhaps in Homo habilis, was within the range of that in modern great apes, while that of the Neanderthal and early H. sapiens skulls was well within the modern human range, although this has been disputed.

A further clue comes from the finding that the thoracic region of the spinal cord is relatively larger in humans than in nonhuman primates, probably because breathing during speech involves extra muscles of the thorax and abdomen. Fossil evidence indicates that this enlargement was not present in early hominins or even in Homo ergaster, dating from 1.6 million years ago, but was present in several Neanderthal fossils.

Emboldened by such evidence…Philip Lieberman has recently made the radical claim that “fully human speech anatomy first appears in the fossil record in the Upper Paleolithic (about 50,000 years ago) and is absent in both Neanderthals and earlier humans.” This provocative statement suggests that articulate speech emerged even later than the arrival of Homo sapiens some 150,000 to 200,000 years ago. While this may be an extreme conclusion, the bulk of evidence does suggest that autonomous speech emerged very late in the human repertoire…pp. 72-74

Primer: Acoustics and Physiology of Human Speech (The Scientist)

Interestingly, although the anatomical evidence for a late development of language is itself fairly recent, Jaynes argued for a late Pleistocene origin of speech in Homo sapiens (but not other archaic humans) back in 1976. He also implied that earlier communication was unspoken, possibly through hand gestures, much as Corballis argues:

It is commonly thought that language is such an inherent part of the human constitution that it must go back somehow through the tribal ancestry of man to the very origin of the genus Homo, that is, for almost two million years. Most contemporary linguists of my acquaintance would like to persuade me that this is true. But with this view, I wish to totally and emphatically disagree. If early man, through these two million years, had even a primordial speech, why is there so little evidence of even simple culture or technology? For there is precious little archaeologically up to 40,000 B.C., other than the crudest of stone tools.

Sometimes the reaction to a denial that early man had speech is, how then did man function or communicate? The answer is very simple: just like all other primates with an abundance of visual and vocal signals which were very far removed from the syntactical language that we practice today. And when I even carry this speechlessness down through the Pleistocene Age, when man developed various kinds of primitive pebble choppers and hand axes, again my linguist friends lament my arrogant ignorance and swear oaths that in order to transmit even such rudimentary skills from one generation to another, there had to be language.

But consider that it is almost impossible to describe chipping flints into choppers in language. The art was transmitted solely by imitation, exactly the same way in which chimpanzees transmit the trick of inserting straws into ant hills to get ants. It is the same problem as the transmission of bicycle riding: does language assist at all?

Because language must make dramatic changes in man’s attention to things and persons, because it allows a transfer of information of enormous scope, it must have developed over a period that shows archaeologically that such changes occurred. Such a one is the late Pleistocene, roughly from 70,000 B.C. to 8000 B.C. This period was characterized climatically by wide variations in temperature, corresponding to the advance and retreat of glacial conditions, and biologically by huge migrations of animals and man caused by these changes in weather. The hominid population exploded out of the African heartland into the Eurasian subarctic and then into the Americas and Australia. The population around the Mediterranean reached a new high and took the lead in cultural innovation, transferring man’s cultural and biological focus from the tropics to the middle latitudes. His fires, caves and furs created for man a kind of transportable microclimate that allowed these migrations to take place.

We are used to referring to these people as late Neanderthalers [sic]. At one time they were thought to be a separate species of man supplanted by Cro-Magnon man around 35,000 B.C. But the more recent view is that they were part of the general human line, which had great variation, a variation that allowed for an increasing pace of evolution, as man, taking his artificial climate with him, spread into these new ecological niches. More work needs to be done to establish the true patterns of settlement, but the most recent emphasis seems to be on its variation, some groups continually moving, others making seasonal migrations, and others staying at a site all the year round.

I am emphasizing the climate changes during this last glacial age because I believe these changes were the basis of the selective pressures behind the development of language through several stages. OoCitBotBM; pp. 129-131

Thus, Jaynes falls into the camp that argues that language was the decisive factor in the transition to behavioral modernity as seen in the archaeological record (as do many others). This would also explain the relative stasis of cultures like that of Homo erectus, whose tools remained basically unchanged for hundreds of thousands of years and show no signs of art, music, or any other kind of abstract thinking.

6.) People using sign language utilize the exact same areas of the brain (as shown by fMRI scans, for example) as people engaged in verbal speech.

Even in modern humans, mimed action activates the brain circuits normally thought of as dedicated to language…activities elicited activity in the left side of the brain in frontal and posterior areas–including Broca’s and Wernicke’s areas–that have been identified since the nineteenth century as the core of the language system…these areas have to do, not just with language, but with the more general linking of symbols to meaning, whether the symbols are words, gestures, images, sounds, or objects….We also know that the use of signed language in the profoundly deaf activates the same brain areas that are activated by speech…p. 64

7.) Hand gestures do not require linearization. Corballis gives the example of an elephant and a woodshed. While some words do sound like what they describe (onomatopoeic words), most do not. In fact, they cannot. Thus, it would be difficult for sounds alone to distinguish between things such as elephants and woodsheds. Gestures, however, are much less limited in their descriptiveness.

Speech…requires that the information be linearized, piped into a sequence of sounds that are necessarily limited in terms of how they can capture the spatial and physical natures of what they represent…Signed languages are clearly less constrained. The hands and arms can mimic the shape of real-world objects and actions, and to some extent lexical information can be delivered in parallel instead of being forced into a rigid temporal sequence. With the hands, it is almost certainly possible to distinguish an elephant from a woodshed, in purely visual terms. pp. 65-66

But see this: Linguistic study proves more than 6,000 languages use similar sounds for common words (ABC)

Over time, sounds may have supplemented hand gestures because they are not dependent on direct lines of sight. They can also transmit descriptive warning calls more effectively (“Look out, a bear is coming!”). Corballis speculates that facial gestures became increasingly incorporated with manual gestures over time, and that these facial gestures eventually also became combined with rudimentary sounds. This was the platform for the evolution of speech. Finally, freeing up the hands completely from the need for communication would have allowed for carrying objects and tool manufacture that was simultaneous with communication.

The switch, then, would have freed the hands for other activities, such as carrying and manufacture. It also allows people to speak and use tools at the same time. It might be regarded, in fact, as an early example of miniaturization, whereby gestures are squeezed from the upper body to the mouth. It also allows the development of pedagogy, enabling us to explain skilled actions while at the same time demonstrating them, as in a modern television cooking show. The freeing of the hands and the parallel use of speech may have led to significant advances in technology, and help explain why humans eventually predominated over the other large-brained hominins, including the Neanderthals, who died out some 30,000 years ago. p. 78

Incidentally, miniaturization, or at least the concept of it, also played a critical role in tool development for Homo sapiens: From Stone Age Chips to Microchips: How Tiny Tools Made Us Human (Emory University)

Eventually, speech supplanted gesture as the dominant method of communication, although hand gestures have never completely gone away, as exemplified by mimes, deaf people, and Italians. Gestures, such as pointing, mimicking, and picking things up, are all still used during the acquisition of language, as any teacher of young children will attest.

Why apes can’t talk: our study suggests they’ve got the voice but not the brains (The Conversation)

The Recursive Mind (Review) – 1

Pink Floyd does recursion

I first learned about recursion in the context of computer programming. The output of some code was fed back as an input into the same code. This kept going until some condition was met. I’m sure every novice programmer has made the mistake where the condition was not specified, or was specified incorrectly, leading to an infinite loop. It’s practically a rite of passage in learning programming.
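Here’s a minimal sketch in Python (my own illustration; names and numbers are arbitrary) of both the well-behaved and the runaway case:

```python
def countdown(n):
    """Recursion done right: each call handles a simpler version of
    the problem (n - 1) until the stopping condition is reached."""
    if n <= 0:                # base case: the condition that ends the recursion
        return "done"
    return countdown(n - 1)   # the function calls itself on a simpler input

def countdown_forever(n):
    """The novice's rite of passage: no base case, so the calls never
    bottom out. Python raises RecursionError rather than looping forever."""
    return countdown_forever(n - 1)

print(countdown(5))           # -> done
# countdown_forever(5)        # -> RecursionError: maximum recursion depth exceeded
```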

I would be remiss not to quote the poet laureate of recursion, Douglas Hofstadter:

WHAT IS RECURSION? It is…nesting, and variations in nesting. The concept is very general. (Stories inside stories, movies inside movies, paintings inside paintings, Russian dolls inside Russian dolls (even parenthetical comments inside parenthetical comments!)–these are just a few of the charms of recursion.)…

Sometimes recursion seems to brush paradox very closely. For example, there are recursive definitions. Such a definition may give the casual viewer the impression that something is being defined in terms of itself. That would be circular and lead to infinite regress, if not to paradox proper. Actually, a recursive definition (when properly formulated) never leads to infinite regress or paradox. This is because a recursive definition never defines something in terms of itself, but always in terms of simpler versions of itself. GEB, Chapter V

Here’s another great example of recursion: a commemorative plaque in Toronto commemorating its own installation: A recursive plaque honoring the installation of a plaque honoring the installation of a plaque honoring the installation of…(BoingBoing)

This commemorative plaque commemorates its own dedication which commemorates its own dedication which commemorates…

Thus, I will define recursion for our purposes as the nesting of like within like. Or, rules that can apply to their own output. A common image used to show this is the Russian Matryoshka dolls, which adorn the cover of The Recursive Mind by Michael C. Corballis, the book we’ll be considering today.

These dolls work in a pretty interesting way. Within each one, there is another doll that is exactly the same. You have multiple copies of the same doll, each within another, until eventually, you get to the smallest doll.

To Understand Recursion, You Must First Understand Recursion (Words and Code)

Another example is what’s called the Droste effect, after this can of Droste’s Cacao which references itself (which references itself, and…). This effect has subsequently been replicated in a number of product packages.

Another definition is, “a procedure which calls itself, or…a constituent that contains a constituent of some kind.” Thus, recursion can be understood as both a process and a structure.

In linguistics, recursion is the unlimited extension of language. It is the ability to embed phrases within phrases and sentences within sentences resulting in the potential of a never-ending sentence.

The Recursiveness of Language – A Linkfest (A Walk in the WoRds)

You can even have a book within a book—such as, for example, The Hipcrime Vocab, the book referenced inside John Brunner’s Stand on Zanzibar, from which this blog takes its name.

Often recursive processes produce recursive structures. Not always, though. For example, an iterative structure can be derived from a recursive process. Something like

AAAAABBBBB can be generated using a recursive procedure. You just nest the AB’s like so:

(A(A(A(A(AB)B)B)B)B)

But—and this turns out to be important—there is nothing in the above structure that indicates it must have been generated recursively. You could just have a series of A’s followed by a series of B’s. This may seem like a trivial point, but what it means is that there could be recursion behind something that does not seem recursive. And the reverse is also true—something might look recursive, but be generated via non-recursive means. The AB sequence shown above could be generated either way. This means that some apparent examples of recursion might actually be something else, such as repetition or iteration. As we’ll see, this means it can be quite tricky to determine whether there truly are examples of recursive thought in non-human animals or human ancestors.
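To make the point concrete, here is a toy sketch (my own, in Python) in which a recursive procedure and a plain iterative one generate the identical string; nothing in the output betrays which process produced it:

```python
def nested(n):
    """Recursive generation: wrap an A...B pair around a smaller copy
    of itself, i.e. (A(A(A(A(AB)B)B)B)B) with the brackets removed."""
    if n == 0:
        return ""
    return "A" + nested(n - 1) + "B"

def flat(n):
    """Iterative generation: simply run off n A's followed by n B's."""
    return "A" * n + "B" * n

# Both produce the identical string, so the output alone cannot tell
# you whether recursion was involved in generating it.
assert nested(5) == flat(5) == "AAAAABBBBB"
```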

Let’s start with a simple linguistic example. Say I take a simple noun-verb phrase like, The dog has a ball. Let’s add another basic noun-verb phrase about the dog: The dog is brown. Each of these is a standalone idea. But I can nest them inside one another this way: The dog who is brown has a ball, or The brown dog has a ball, or The brown dog’s ball, etc.

Then let’s add this fact: The dog belongs to Erik. Therefore, Erik’s brown dog has a ball. Let’s say it’s my ball. Erik’s brown dog has my ball. Maybe the dog is barking at me right now. Erik’s brown dog, who has my ball, is barking at me right now. Do you get it? You get that Erik’s brown dog who has my ball is barking at me right now.

Anyway, we could go on doing this all day, but I think you get the point. Recursive structures can theoretically go on to infinity, but in reality are constrained. After all, there’s only so much time in the day. Corballis explains that recursive constructions need not involve embedding of exactly the same constituents, but constituents of the same kind—a process known as self-similar embedding. He gives the example of noun phrases. For example, Nusrat Fateh Ali Khan’s first album was entitled “The Day, The Night, The Dawn, The Dusk” (you can listen to it here). That’s basically four noun phrases. From these constituents, one can make a new sentence like “The day gives way to night,” or perhaps a movie title like From Dusk Till Dawn.
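Here’s a loose sketch of self-similar embedding (a toy grammar of my own devising, not Corballis’s; the vocabulary is arbitrary): the rule for building a noun phrase refers to noun phrases, so constituents of the same kind nest inside one another to whatever depth we ask for:

```python
import random
random.seed(3)

def noun_phrase(depth):
    """A noun phrase is either simple, or a noun phrase embedded inside
    a larger noun phrase of the same kind (self-similar embedding)."""
    simple = random.choice(["the dog", "the ball", "Erik", "the dawn"])
    if depth == 0:
        return simple
    # each expansion embeds a smaller noun phrase of the same kind
    return random.choice([
        f"{noun_phrase(depth - 1)}'s {random.choice(['dog', 'ball'])}",
        f"{simple} that {random.choice(['has', 'chases'])} {noun_phrase(depth - 1)}",
    ])

print(noun_phrase(3))  # e.g. "the dog that chases Erik's ball"
```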

Recursive language is needed to express recursive thoughts, and there’s a good case to be made that recursive thought is the key to our unique cognitive abilities. This is exactly the case Corballis makes.

…recursive processes and structures can in principle extend without limit, but are limited in practice. Nevertheless, recursion does give rise to the *concept* of infinity, itself perhaps limited to the human imagination. After all, only humans have acquired the ability to count indefinitely, and to understand the nature of infinite series, whereas other species can at best merely estimate quantity, and are accurate only up to some small finite number. Even in language, we understand that a sentence can in principle be extended indefinitely, even though in practice it cannot be–although the novelist Henry James had a damn good try…

Corballis mentioned Henry James, above, and below is his longest sentence. Click on the link to see its structure diagrammed, whereupon you can see the recursive (embedded) nature of his language more clearly.

“The house had a name and a history; the old gentleman taking his tea would have been delighted to tell you these things: how it had been built under Edward the Sixth, had offered a night’s hospitality to the great Elizabeth (whose august person had extended itself upon a huge, magnificent and terribly angular bed which still formed the principal honour of the sleeping apartments), had been a good deal bruised and defaced in Cromwell’s wars, and then, under the Restoration, repaired and much enlarged; and how, finally, after having been remodelled and disfigured in the eighteenth century, it had passed into the careful keeping of a shrewd American banker, who had bought it originally because (owing to circumstances too complicated to set forth) it was offered at a great bargain: bought it with much grumbling at its ugliness, its antiquity, its incommodity, and who now, at the end of twenty years, had become conscious of a real aesthetic passion for it, so that he knew all its points and would tell you just where to stand to see them in combination and just the hour when the shadows of its various protuberances—which fell so softly upon the warm, weary brickwork—were of the right measure.” (James 2003, 60)

The Henry James Sentence: New Quantitative Approaches (Jonathan Reeve)

The appealing aspect of recursion is that it can in principle extend indefinitely to create thoughts (and sentences) of whatever complexity is required. The idea has an elegant simplicity, giving rise to what Chomsky called “discrete infinity,” or what Wilhelm von Humboldt famously called “the infinite use of finite means.” And although recursion is limited in practice, we can nevertheless achieve considerable depths of recursive thought, arguably unsurpassed in any other species. In chess, for example, a player may be able to think recursively three or four steps ahead, examining possible moves and countermoves, but the number of possibilities soon multiplies beyond the capacity of the mind to hold them.

Deeper levels of recursion may be possible with the aid of writing, or simply extended time for rehearsal and contemplation, or extended memory capacity through artificial means. The slow development of a complex mathematical proof, for example, may require subtheorems within subtheorems. Plays or novels may involve recursive loops that build slowly—in Shakespeare’s Twelfth Night, for example, Maria foresees that Sir Toby will eagerly anticipate that Olivia will judge Malvolio absurdly impertinent to suppose that she wishes him to regard himself as her preferred suitor. (This is recursive embedding of mental states, in that Sir Toby’s anticipation is embedded in what Maria foresees, Olivia’s judgement is embedded in what Sir Toby anticipates, and so on).

As in fiction, so in life; we all live in a web of complex recursive relationships, and planning a dinner party may need careful attention to who thinks what of whom. pp. 8-9

We do indeed live in a web of complex social relationships, but some of us live in a more complex web than others. A small village in the jungle is vastly different from a medieval free city, and certainly different from a modern city of millions of people. Similarly, where one lives and what one does for a living also have an effect. A politician or businessman lives in a much more complex social world than a painter or a ratcatcher. People who grow up in a village where everyone is related to one another have a much easier cognitive task than a traveling salesman or an international diplomat.

I point all this out to prepare the way for an argument I’m going to make later on, which is my own, but loosely based on ideas from Julian Jaynes. I’m going to make the case that increasing social complexity in human societies over time selected for recursive thinking abilities. I will also argue that such abilities led to the creation of things like writing and mathematics, which emerged only several thousand years ago, and were initially the province of a small number of elites (indicating that such abilities may be quite recent). I will also argue that recursive thinking allowed for advanced organization and planning abilities, which early leaders used to justify their elevated social status. Furthermore, I will argue that the type of “reflective self” that Jaynes saw developing during the Axial Age was due to increasingly recursive modes of thought. It was not caused by social breakdown, but rather by the increasing cognitive challenges demanded by social structures, as opposed to the primarily environmental challenges that earlier humans faced. This should become clearer as we discuss the social benefits of recursive thinking below.

In other words, consciousness did not arise so much from the breakdown of the bicameral mind, as it did from the rise of the recursive mind. That’s my argument, anyway.

As recursive thinking advanced, so too did the abilities which Jaynes notes as giving rise to the construction of the reflective, vicarial self—extended metaphor, mental time travel, higher-order theory of mind, and so on, as we’ll see. In contrast, the lack or paucity of recursive thought prior to this period is what prevented reflective self-consciousness (or, in Jaynes’s parlance, “consciousness”) from developing. Thus my timeline is similar to Jaynes’s, as are the conclusions, but the underlying reasons differ. We’ll get into this in more depth later.

An example of the infinitely extensible nature of language is the novelty of one-sentence novels, of which there are a surprisingly large number. Here is a good review of three of the best ones, where the review itself is written as a single sentence (providing yet another example of recursion!):

Awe-Inspiring One-Sentence Novels You Never Knew Existed (The GLOCAL Experience)

In 2016, an Irish novelist won a literary prize for a one-sentence novel. To me, this novel is exemplary of the kinds of recursive thinking we’re describing here, and of how it’s necessary to construct the vicarial self (the Analog ‘I’ and Metaphor ‘me’). The novel demonstrates not only a highly embedded (recursive) sentence, but also mental time travel, advanced theory of mind (the ability to extrapolate the mental states of other characters by inserting oneself into their experience; a requirement of good fiction), and autobiographical narratization (about which, more below). We’ll cover each of these concepts in more depth:

It stutters into life, like a desperate incantation or a prose poem, minus full-stops but chock-full of portent: “the bell / the bell as / hearing the bell as / hearing the bell as standing here / the bell being heard standing here / hearing it ring out through the grey light of this / morning, noon or night”…The speaker hearing the bell is one Marcus Conway, husband, father and a civil engineer in some small way responsible for the wild rush of buildings, roads and bridges that disrupted life in Ireland during the boom that in the book has just gone bust. Marcus is a man gripped by “a crying sense of loneliness for my family”. We don’t quite know why until the very end of the novel, which comes both as a surprise and a confirmation of all that’s gone before.

Among its many structural and technical virtues, everything in the book is recalled, but none of it is monotonous. Marcus remembers the life of his father and his mother, for example, a world of currachs and Massey Fergusons. He recalls a fateful trip to Prague for a conference. He recalls Skyping his son in Australia, scenes of intimacy with his wife, and a trip to his artist daughter’s first solo exhibition, which consists of the text of court reports from local newspapers written in her own blood, “the full gamut from theft and domestic violence to child abuse, public order offences, illegal grazing on protected lands, petty theft, false number plates, public affray, burglary, assault and drunk-driving offences”. Above all, he remembers at work being constantly under pressure from politicians and developers, “every cunt wanting something”, the usual “shite swilling through my head, as if there weren’t enough there already”. He recalls when his wife got sick from cryptosporidiosis, “a virus derived from human waste which lodged in the digestive tract, so that […] it was now the case that the citizens were consuming their own shit, the source of their own illness”.

Single sentence novel wins Goldsmiths prize for books that ‘break the mould’ (The Guardian)

Solar Bones by Mike McCormack review – an extraordinary hymn to small-town Ireland (The Guardian)

In the example above, we can see how recursive thought is intrinsically tied to self-identity, which is in turn connected with episodic memory, which is also tied to recursion, as we will see. In brief, I will argue that recursive thought is tied to the kind of reflective self-consciousness that Jaynes was describing; as such, we should be concerned not so much with the beginning of language as the origin of consciousness, but with the beginning of recursive thought as the beginning of consciousness. It is quite possible for spoken language to have existed for communicative purposes for thousands of years prior to recursive thought and its subsequent innovations.

I focus on two modes of thought that are recursive, and probably distinctively human. One is mental time travel, the ability to call past episodes to mind and also to imagine future episodes. This can be a recursive operation in that imagined episodes can be inserted into present consciousness, and imagined episodes can even be inserted into other imagined episodes. Mental time travel also blends into fiction, whereby we imagine events that have never occurred, or are not necessarily planned for the future. Imagined events can have all of the complexity and variability of language itself. Indeed I suggest that language emerged precisely to convey this complexity, so that we can share our memories, plans and fictions.

The second aspect of thought is what has been called theory of mind, the ability to understand what is going on in the minds of others. This too, is recursive. I may know not only what you are thinking, but I may also know that you know what I am thinking. As we shall see, most language, at least in the form of conversation, is utterly dependent on this capacity. No conversation is possible unless the participants share a common mind-set. Indeed, most conversation is fairly minimal, since the thread of conversation is largely assumed. I heard a student coming out of a lecture saying to her friend, “That was really cool.” She assumed, probably rightly, that her friend knew exactly what “that” was, and what she meant by “cool.” pp. ix-x

It goes beyond that, however. Later, we’ll look at work by Robin Dunbar which suggests that both organized religion and complex kinship groupings are dependent upon these same recursive thought processes. Given how important these social structures are to human dominance of the planet (perhaps the most important), we can see that recursion might be the skeleton key to all the things that make us uniquely human. This is especially true given the evidence (although it is disputed) that our predecessor species (i.e. Archaic Humans and earlier) were unable to engage in this kind of thinking.