The Recursive Mind (Review) – 2

Part 1

1. Language

We’ve already covered language a bit. A good example of language recursion is given by children’s rhymes, such as “This Is the House That Jack Built”:

It is a cumulative tale that does not tell the story of Jack’s house, or even of Jack who built the house, but instead shows how the house is indirectly linked to other things and people, and through this method tells the story of “The man all tattered and torn”, and the “Maiden all forlorn”, as well as other smaller events, showing how these are interlinked…(Wikipedia)

“The House That Jack Built” plays on the process of embedding in English noun phrases. The nursery rhyme is one sentence that continuously grows by embedding more and more relative clauses as postmodifiers in the noun phrase that ends the sentence…In theory, we could go on forever because language relies so heavily on embedding.

The Noun Phrase (Papyr.com)

In English, clauses can be embedded either in the center, or at the end:

In “The House That Jack Built” clauses are added to the right. This is called right-embedding. Much more psychologically taxing is so-called center-embedding, where clauses are inserted in the middle of clauses. We can cope with a single embedded clause, as in:

“The malt that the rat ate lay in the house that Jack built.”

But it becomes progressively more difficult as we add further embedded clauses:

“The malt [that the rat (that the cat killed) ate] lay in the house that Jack built.”

Or worse:

“The malt [that the rat (that the cat {that the dog chased} killed) ate] lay in the house that Jack built.”

I added brackets in the last two examples that may help you see the embeddings, but even so they’re increasingly difficult to unpack. Center-embedding is difficult because words to be linked are separated by the embedded clauses; in the last example above, it was the malt that lay in the house, but the words malt and lay are separated by twelve words. In holding the word malt in mind in order to hear what happened to it, one must also deal with separations between rat and ate and between cat and killed…Center embeddings are more common in written language than in spoken language, perhaps because when language is written you can keep it in front of you indefinitely while you try to figure out the meaning….The linguistic rules that underlie our language faculty can create utterances that are potentially, if not actually, unbounded in potential length and variety. These rules are as pure and beautiful as mathematics…

The Truth About Language pp. 13-14
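To make the structural difference concrete, here is a minimal Python sketch of my own (not from Corballis or the other sources quoted above) that generates both kinds of embedding from the same sort of recursive rule. Notice how, in the center-embedded version, the verbs pile up at the end, far from the nouns they belong to, which is exactly what makes those sentences so hard to parse.

```python
# A sketch of right- vs. center-embedding (my own illustration).
def right_embed(pairs, base="the malt that lay in the house that Jack built"):
    """Each new clause attaches at the right edge, as in the nursery rhyme."""
    phrase = base
    for noun, verb in pairs:  # innermost pair first
        phrase = f"the {noun} that {verb} {phrase}"
    return "This is " + phrase

def center_embed(pairs, head="the malt", tail="lay in the house that Jack built"):
    """Each new clause is inserted in the middle; the verbs stack up at the end."""
    nouns, verbs = [head], []
    for noun, verb in pairs:
        nouns.append(f"that the {noun}")
        verbs.insert(0, verb)  # each verb ends up far from its noun
    return " ".join(nouns + verbs) + " " + tail

pairs = [("rat", "ate"), ("cat", "killed"), ("dog", "chased")]
print(right_embed(pairs))   # easy to follow, however long it gets
print(center_embed(pairs))  # "the malt that the rat that the cat that the dog chased killed ate lay..."
```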

Or a song you may have sung when you were a child: “There Was an Old Lady Who Swallowed a Fly.”

The song tells the nonsensical story of an old woman who swallows increasingly large animals, each to catch the previously swallowed animal, but dies after swallowing a horse. The humour of the song stems from the absurdity that the woman is able to inexplicably and impossibly swallow animals of preposterous sizes without dying, suggesting that she is both superhuman and immortal; however, the addition of a horse is finally enough to kill her. Her inability to survive after swallowing the horse is an event that abruptly and unexpectedly applies real-world logic to the song, directly contradicting her formerly established logic-defying animal-swallowing capability. (Wikipedia)

The structure can be expressed this way:

cow [goat (dog {cat [bird (spider {fly})]})] – after which, she swallows the horse and expires. The resulting autopsy would no doubt unfold a chain of events resembling a Matryoshka doll (or a Turducken).
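If you wanted to generate the song mechanically, it amounts to one recursive call per animal, with the fly as the base case. A toy sketch of my own, assuming the usual sequence of animals:

```python
# The cumulative song as recursion (a toy illustration, not from the book).
animals = ["fly", "spider", "bird", "cat", "dog", "goat", "cow"]

def verse(i):
    if i == 0:
        return "She swallowed the fly. Perhaps she'll die."
    # each verse embeds the entire previous verse inside itself
    return f"She swallowed the {animals[i]} to catch the {animals[i-1]}. " + verse(i - 1)

print(verse(len(animals) - 1))  # unwinds from the cow all the way back to the fly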

Or yet another chestnut from my childhood: “There’s a Hole in My Bucket,” which is less an example of recursion than a kind of strange loop:

The song describes a deadlock situation: Henry has a leaky bucket, and Liza tells him to repair it. To fix the leaky bucket, he needs straw. To cut the straw, he needs an axe. To sharpen the axe, he needs to wet the sharpening stone. To wet the stone, he needs water. But to fetch water, he needs the bucket, which has a hole in it. (Wikipedia)
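In programming terms this is a circular dependency, the same thing that produces deadlock in concurrent systems. A small sketch of my own, encoding the song’s chain of needs:

```python
# "There's a Hole in My Bucket" as a dependency cycle (my own encoding).
needs = {
    "fix the bucket": "straw",
    "straw": "axe",             # to cut the straw
    "axe": "stone",             # to sharpen the axe
    "stone": "water",           # to wet the stone
    "water": "fix the bucket",  # to fetch water... you need the bucket
}

def resolve(task, seen=()):
    if task in seen:  # we've come back around: deadlock
        return "deadlock: " + " -> ".join(seen + (task,))
    return resolve(needs[task], seen + (task,))

print(resolve("fix the bucket"))
```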

Whether all human languages have a recursive structure by default, or are at least capable of it, is one of the most controversial topics in linguistics.

Bringing more data to language debate (MIT News)

The idea that language is not just based on external stimulus, but is in some way “hard-wired” into the human brain was first developed by Noam Chomsky. He argued that this meant that grammatical constructions were somehow based on the brain’s inner workings (i.e. how the brain formulates thoughts internally), and therefore all languages would exhibit similar underlying structures, something which he called the “Universal Grammar.”

Furthermore, he argued that language construction at its most fundamental level could be reduced to a single recursive operation he called Merge. This was part of his so-called “Minimalist Program” of language construction.

Merge is…when two syntactic objects are combined to form a new syntactic unit (a set).

Merge also has the property of recursion in that it may apply to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge.

This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is “an indispensable operation of a recursive system … which takes two syntactic objects A and B and forms the new object G={A,B}” (Wikipedia)
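As a data structure, Merge is about as simple as an operation can be. Here is a toy sketch of my own (an illustration of the definition quoted above, not anyone’s actual grammar formalism):

```python
# Merge as set formation: G = {A, B}. Because Merge's output is a legal
# input to Merge, the operation is recursive by construction.
def merge(a, b):
    return frozenset([a, b])

# Lexical items are the base case; sets built by merge can be merged again.
vp = merge("loves", "John")   # {loves, John}
s  = merge("Jane", vp)        # {Jane, {loves, John}}
print(s)
```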

Merge applies to I-language, the thinking behind language, whereas the language spoken out loud is translated into what he calls E-language (E for external). Corballis explains:

The Merge operation…strictly hold for what Chomsky calls I-language, which is the internal language of thought, and need not apply directly to E-language, which is external language as actually spoken or signed. In mapping I-language to E-language, various supplementary principles are needed. For instance…the merging of ‘Jane loves John’ with ‘Jane flies airplanes’ to get, ‘Jane, who flies airplanes, loves John’ requires extra rules to introduce the word who and delete one copy of the word Jane.

I-language will map onto different E-languages in different ways. Chomsky’s notion of unbounded Merge, recursively applied, is therefore essentially an idealization, inferred from the study of external languages, but is not in itself directly observable. pp. 23-24

It’s notable that whatever the other merits of Merge, it does appear to be a good description of how language is extended via metaphor. I recently ran across a good example of this: the word inosculation, meaning to homogenize, make continuous, or interjoin. Its root is the verb “to kiss,” which is itself derived from the word for “mouth.” This word, like so many others, was created through recursion and metaphor.

From in- +‎ osculate, from Latin ōsculātus (“kiss”), from ōs + -culus (“little mouth”).

The sheer diversity of human languages that have been found and studied has put Chomsky’s Universal Grammar theory on ice. There does not seem to be any sort of universal grammar that we can find, nor a universal method of thought which underlies it. A few languages have been discovered that do not appear to use recursion, most famously the Pirahã language of the Amazon, but also the Iatmul language of New Guinea and some Australian languages spoken in West Arnhem Land. For example, the phrase “They stood watching us fight” would be rendered in Bininj Gun-Wok as “They stood/they were watching us/we were fighting.” (p. 27)

Recursion and Human Thought: A Talk with Daniel Everett (Edge Magazine)

It continues to be debated whether animals have a capacity for recursive expression. A study in 2006 argued that starling calls exhibited a recursive quality, but this has been questioned. As I mentioned earlier, it is often difficult to tell whether something that appears recursive was actually generated recursively.

Starlings vs. Chomsky (Discover)

Corballis argues here (as he has in other books, which I will also refer to), that the human mental capacity for language evolved first via gesticulation (hand gestures), rather than verbal sounds (speech). Only much later, he argues, did communication switch from primarily hand gestures to speech. “I have argued…that the origins of language lie in manual gestures, and the most language-like behavior in nonhuman species is gestural.” (p. 161) Some reasons he gives for believing this are:

1.) We have had extensive control over our arms, hands and fingers (as demonstrated by tool use and manufacture, for example) for millions of years, but the fine motor control over our lungs and vocal tract required to produce articulate speech is of far more recent vintage. It is also unique to our species—other apes don’t have the control over the lungs or mouth required for speech. In fact, the unique control that humans possess over their breathing leads Corballis to speculate that distant human ancestors must have spent considerable time diving in water, which requires extensive breath control. Human babies, for example, instinctively know to hold their breath in water in a way that other apes—including our closest relatives—cannot. This leads him to endorse an updated version of the Aquatic Ape theory called the Littoral hypothesis, or the Beachcomber theory.

In an extensive discussion of the aquatic ape hypothesis and the various controversies surrounding it, Mark Verhaegen suggests that our apelike ancestors led what he calls an aquarboreal life, on the borders between forest and swamp lands. There they fed on shellfish and other waterborne foods, as well as on plants and animals in the neighboring forested area…In this view, the environment that first shaped our evolution as humans was not so much the savanna as the beach. During the Ice ages the sea levels dropped, opening up territory rich in shellfish but largely devoid of trees. Our early Pleistocene forebears dispersed along the coasts, and fossils have been discovered not only in Africa but as far away as Indonesia, Georgia, and even England. Stone tools were first developed not so much for cutting carcasses of game killed on land as for opening and manipulating shells. Bipedalism too was an adaptation not so much for walking and running as for swimming and shallow diving.

Verhaegen lists a number of features that seem to have emerged only in Pleistocene fossils, some of which are present in other diving species but not in pre-Pleistocene hominins. These include loss of fur; an external nose; a large head; and head, body, and legs all in a straight line. The upright stance may have helped individuals stand tall and spot shellfish in the shallow water. Later, in the Pleistocene, different Homo populations ventured inland along rivers and perhaps then evolved characteristics more suited to hunting land-based animals. The ability to run, for instance, seems to have evolved later in the Pleistocene. But Verhaegen suggests that, in fact, we are poorly adapted to a dry, savannalike environment and retain many littoral adaptations (that is, adaptations to coastal regions): “We have a water- and sodium-wasting cooling system of abundant sweat glands, totally unfit for a dry environment. Our maximal urine concentration is much too low for a savanna-dwelling mammal. We need much more water than other primates and have to drink more often than savanna inhabitants, yet we cannot drink large quantities at a time.”

Part of the reason for our swollen brains may derive from a diet of shellfish and other fish accessible the shallow-water foraging [sic]. Seafood supplies docosahexaenoic acid (DHA), an omega 3 fatty acid, and some have suggested that it was this that drove the increase in brain size, reinforcing the emergence of language and social intelligence.

Michael A. Crawford and colleagues have long proposed that we still need to supplement our diets with DHA and other seafoods to maintain fitness. Echoing Crawford, Marcos Duarte issues a grim warning: “The sharp rise in brain disorders, which, in many developed countries, involves social costs exceeding those of heart disease and cancer combined, has been deemed the most worrying change in disease pattern in modern societies, calling for urgent consideration of seafood requirements to supply the omega 3 and DHA required for brain health.”
The Truth About Language: What It Is and Where It Came From; pp. 95-97

2.) Chimpanzees appear to have little control over the types of sounds that they make. Vocalization in primates appears to be largely instinctual, and not under conscious control.

3.) Although apes such as chimpanzees, bonobos and gorillas cannot learn spoken language, they can be taught to communicate with humans using sign language. Individual apes have learned vocabularies of as many as a thousand signs or symbols, most notably Koko the gorilla and the bonobo Kanzi.

Manual activity in primates is intentional and subject to learning, whereas vocalizations appear to be largely involuntary and fixed. In teaching great apes to speak, much greater success has been achieved through gesture and the use of keyboards than through vocalization, and the bodily gestures of apes in the wild are less contained by context than are their vocalizations. These observations strongly suggest that language evolved from manual gestures. p. 57



4.) Mirror neurons are neurons in our brain that fire not only when we perform an action, but also when we watch someone else perform that action. They were first discovered in monkeys (they are sometimes called “monkey-see, monkey-do” neurons), but are present in all apes. These are part of a larger network of regions called the mirror system. It has been proposed that language grew out of this mirror system. The underlying idea is that, “[W]e perceive speech not in terms of the acoustic patterns it creates, but in terms of how we ourselves would articulate it.” (p. 61) This is called the motor theory of speech perception. If this theory is true, it would point to an origin of language in gestural imitation rather than calls, which do not recruit mirror neurons in other primates.

The mirror system, in contrast to the primate vocalization system, has to do with intentional action, and is clearly modifiable through experience. For example, mirror neurons in the monkey brain respond to the sounds of certain actions, such as the tearing of paper or the cracking of nuts, and these responses can only have been learned. The neurons were not activated, though, by monkey calls, suggesting that vocalization itself is not part of the mirror system in monkeys…

…in the monkey, mirror neurons responded to transitive acts, as in reaching for an actual object, but did not respond to intransitive acts, where a movement is mimed and involves no object. In humans, by contrast, the mirror system responds to both transitive and intransitive acts, and the incorporation of intransitive acts would have paved the way to the understanding of acts that are symbolic rather than object-related…functional magnetic resonance imaging (fMRI) in humans shows that the mirror-neuron region of the premotor cortex is activated not only when they watch movements of the foot, hand, and mouth, but also when they read phrases pertaining to these movements. Somewhere along the line, the mirror system became interested in language. p. 62

5.) The anatomical structures in the mouth and throat required to produce something like human vocal patterns (phonemes) also came fairly late in human evolution. There is no evidence that even archaic humans could do it properly:

One requirement for articulate speech was the lowering of the larynx, creating a right-angled vocal tract that allows us to produce the wide range of vowels that characterize speech. Philip Lieberman has argued that this modification was incomplete even in the Neanderthals…Daniel Lieberman…had shown that the structure of the cranium underwent changes after we split with the Neanderthals. One such change is the shortening of the sphenoid, the central bone of the cranial base from which the face grows forward, resulting in a flattened face. The flattening may have been part of the change that created the right-angled vocal tract, with horizontal and vertical components of equal length. This is the modification that allowed us the full range of vowel sounds, from ah to oo.

Other anatomical evidence suggests that the anatomical requirements for fully articulate speech were probably not complete until late in the evolution of Homo. For example, the hypoglossal nerve, which innervates the tongue, is also much larger in humans, perhaps reflecting the importance of tongued gestures in speech. The evidence suggests that the size of the hypoglossal canal in early australopithecines, and perhaps in Homo habilis, was within the range of that in modern great apes, while that of the Neanderthal and early H. sapiens skulls was contained well within the modern human range, although this has been disputed.

A further clue comes from the finding that the thoracic region of the spinal cord is relatively larger in humans than in nonhuman primates, probably because breathing during speech involves extra muscles of the thorax and abdomen. Fossil evidence indicates that this enlargement was not present in early hominins or even in Homo ergaster, dating from 1.6 million years ago, but was present in several Neanderthal fossils.

Emboldened by such evidence…Philip Lieberman has recently made the radical claim that “fully human speech anatomy first appears in the fossil record in the Upper Paleolithic (about 50,000 years ago) and is absent in both Neanderthals and earlier humans.” This provocative statement suggests that articulate speech emerged even later than the arrival of Homo sapiens some 150,000 to 200,000 years ago. While this may be an extreme conclusion, the bulk of evidence does suggest that autonomous speech emerged very late in the human repertoire…pp. 72-74

Primer: Acoustics and Physiology of Human Speech (The Scientist)

Interestingly, although the anatomical evidence for a late development of speech is itself fairly recent, Jaynes argued for a late Pleistocene origin of speech in Homo sapiens (but not other archaic humans) back in 1976. He also implied that earlier communication was unspoken, possibly through hand gestures, much the way Corballis argues:

It is commonly thought that language is such an inherent part of the human constitution that it must go back somehow through the tribal ancestry of man to the very origin of the genus Homo, that is, for almost two million years. Most contemporary linguists of my acquaintance would like to persuade me that this is true. But with this view, I wish to totally and emphatically disagree. If early man, through these two million years, had even a primordial speech, why is there so little evidence of even simple culture or technology? For there is precious little archaeologically up to 40,000 B.C., other than the crudest of stone tools.

Sometimes the reaction to a denial that early man had speech is, how then did man function or communicate? The answer is very simple: just like all other primates with an abundance of visual and vocal signals which were very far removed from the syntactical language that we practice today. And when I even carry this speechlessness down through the Pleistocene Age, when man developed various kinds of primitive pebble choppers and hand axes, again my linguist friends lament my arrogant ignorance and swear oaths that in order to transmit even such rudimentary skills from one generation to another, there had to be language.

But consider that it is almost impossible to describe chipping flints into choppers in language. The art was transmitted solely by imitation, exactly the same way in which chimpanzees transmit the trick of inserting straws into ant hills to get ants. It is the same problem as the transmission of bicycle riding: does language assist at all?

Because language must make dramatic changes in man’s attention to things and persons, because it allows a transfer of information of enormous scope, it must have developed over a period that shows archaeologically that such changes occurred. Such a one is the late Pleistocene, roughly from 70,000 B.C. to 8000 B.C. This period was characterized climatically by wide variations in temperature, corresponding to the advance and retreat of glacial conditions, and biologically by huge migrations of animals and man caused by these changes in weather. The hominid population exploded out of the African heartland into the Eurasian subarctic and then into the Americas and Australia. The population around the Mediterranean reached a new high and took the lead in cultural innovation, transferring man’s cultural and biological focus from the tropics to the middle latitudes. His fires, caves and furs created for man a kind of transportable microclimate that allowed these migrations to take place.

We are used to referring to these people as late Neanderthalers [sic]. At one time they were thought to be a separate species of man supplanted by Cro-Magnon man around 35,000 B.C. But the more recent view is that they were part of the general human line, which had great variation, a variation that allowed for an increasing pace of evolution, as man, taking his artificial climate with him, spread into these new ecological niches. More work needs to be done to establish the true patterns of settlement, but the most recent emphasis seems to be on its variation, some groups continually moving, others making seasonal migrations, and others staying at a site all the year round.

I am emphasizing the climate changes during this last glacial age because I believe these changes were the basis of the selective pressures behind the development of language through several stages. OoCitBotBM; pp. 129-131

Thus, Jaynes falls into the camp that argues that language was the decisive factor in the transition to behavioral modernity as seen in the archaeological record (as do many others). This would also explain the relative stasis of cultures like that of Homo erectus, whose tools remained basically unchanged for hundreds of thousands of years and which showed no signs of art, music, or any other kind of abstract thinking.

6.) People using sign language utilize the exact same areas of the brain (as shown by fMRI scans, for example) as people engaged in verbal speech.

Even in modern humans, mimed action activates the brain circuits normally thought of as dedicated to language…activities elicited activity in the left side of the brain in frontal and posterior areas–including Broca’s and Wernicke’s areas–that have been identified since the nineteenth century as the core of the language system…these areas have to do, not just with language, but with the more general linking of symbols to meaning, whether the symbols are words, gestures, images, sounds, or objects….We also know that the use of signed language in the profoundly deaf activates the same brain areas that are activated by speech…p. 64

7.) Hand gestures do not require linearization. Corballis gives the example of an elephant and a woodshed. While some words do sound like what they describe (onomatopoeic words), most do not. In fact, they cannot. Thus, it would be difficult for sounds alone to distinguish between things such as elephants and woodsheds. Gestures, however, are much less limited in their descriptiveness.

Speech…requires that the information be linearized, piped into a sequence of sounds that are necessarily limited in terms of how they can capture the spatial and physical natures of what they represent…Signed languages are clearly less constrained. The hands and arms can mimic the shape of real-world objects and actions, and to some extent lexical information can be delivered in parallel instead of being forced into a rigid temporal sequence. With the hands, it is almost certainly possible to distinguish an elephant from a woodshed, in purely visual terms. pp. 65-66

But see this: Linguistic study proves more than 6,000 languages use similar sounds for common words (ABC)

Over time, sounds may have supplemented hand gestures because they are not dependent on direct lines of sight. They can also transmit descriptive warning calls more effectively (“Look out, a bear is coming!”). Corballis speculates that facial gestures became increasingly incorporated with manual gestures over time, and that these facial gestures eventually also became combined with rudimentary sounds. This was the platform for the evolution of speech. Finally, freeing up the hands completely from the need for communication would have allowed for carrying objects and tool manufacture that was simultaneous with communication.

The switch, then, would have freed the hands for other activities, such as carrying and manufacture. It also allows people to speak and use tools at the same time. It might be regarded, in fact, as an early example of miniaturization, whereby gestures are squeezed from the upper body to the mouth. It also allows the development of pedagogy, enabling us to explain skilled actions while at the same time demonstrating them, as in a modern television cooking show. The freeing of the hands and the parallel use of speech may have led to significant advances in technology, and help explain why humans eventually predominated over the other large-brained hominins, including the Neanderthals, who died out some 30,000 years ago. p. 78

Incidentally, miniaturization, or at least the concept of it, also played a critical role in tool development for Homo sapiens: From Stone Age Chips to Microchips: How Tiny Tools Made Us Human (Emory University)

Eventually, speech supplanted gesture as the dominant method of communication, although hand gestures have never completely gone away, as exemplified by mimes, deaf people, and Italians. Gestures, such as pointing, mimicking, and picking things up, are all still used during the acquisition of language, as any teacher of young children will attest.

Why apes can’t talk: our study suggests they’ve got the voice but not the brains (The Conversation)

The Recursive Mind (Review) – 1

Pink Floyd does recursion

I first learned about recursion in the context of computer programming. The output of some code was fed back as an input into the same code. This kept going until some criterion was met. I’m sure every novice programmer has made the mistake where the stopping criterion was not specified, or was specified incorrectly, leading to infinite recursion. It’s practically a rite of passage in learning programming.
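For anyone who hasn’t had the pleasure, the rite of passage looks something like this (a generic Python example of my own, not from the book):

```python
def countdown(n):
    if n == 0:                 # the stopping criterion (base case)
        return "done"
    return countdown(n - 1)    # recurse on a simpler input

def countdown_forever(n):
    # the classic mistake: no base case, so this recurses until the
    # interpreter gives up and raises RecursionError
    return countdown_forever(n - 1)
```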

I would be remiss not to quote the poet laureate of recursion, Douglas Hofstadter:

WHAT IS RECURSION? It is…nesting, and variations in nesting. The concept is very general. (Stories inside stories, movies inside movies, paintings inside paintings, Russian dolls inside Russian dolls (even parenthetical comments inside parenthetical comments!)–these are just a few of the charms of recursion.)…

Sometimes recursion seems to brush paradox very closely. For example, there are recursive definitions. Such a definition may give the casual viewer the impression that something is being defined in terms of itself. That would be circular and lead to infinite regress, if not to paradox proper. Actually, a recursive definition (when properly formulated) never leads to infinite regress or paradox. This is because a recursive definition never defines something in terms of itself, but always in terms of simpler versions of itself. GEB, Chapter V
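The classic illustration of Hofstadter’s point is the factorial function (a standard textbook example, not a quote from GEB): it appears to be defined in terms of itself, but each step refers to a strictly simpler version, so the regress bottoms out.

```python
def factorial(n):
    if n == 0:
        return 1                     # the non-recursive anchor
    return n * factorial(n - 1)      # defined via a *simpler* version of itself
```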

Here’s another great example of recursion: a commemorative plaque in Toronto commemorating its own installation: A recursive plaque honoring the installation of a plaque honoring the installation of a plaque honoring the installation of…(BoingBoing)

This commemorative plaque commemorates its own dedication which commemorates its own dedication which commemorates…

Thus, I will define recursion for our purposes as the nesting of like within like. Or, rules that can apply to their own output. A common image used to show this is the Russian Matryoshka dolls, which adorn the cover of The Recursive Mind by Michael C. Corballis, the book we’ll be considering today.

These dolls work in a pretty interesting way. Within each one, there is another doll that is exactly the same. You have multiple copies of the same doll, each within another, until eventually, you get to the smallest doll.

To Understand Recursion, You Must First Understand Recursion (Words and Code)

Another example is what’s called the Droste effect, after a can of Droste’s Cacao whose label references itself (which references itself, and…). This effect has subsequently been replicated on a number of product packages.

Another definition is, “a procedure which calls itself, or…a constituent that contains a constituent of some kind.” Thus, recursion can be understood as both a process and a structure.

In linguistics, recursion is the unlimited extension of language. It is the ability to embed phrases within phrases and sentences within sentences resulting in the potential of a never-ending sentence.

The Recursiveness of Language – A Linkfest (A Walk in the WoRds)

You can even have a book within a book—such as, for example, The Hipcrime Vocab, the book referenced inside John Brunner’s Stand on Zanzibar, from which this blog takes its name.

Often recursive processes produce recursive structures. Not always, though. For example, an iterative structure can be derived from a recursive process. Something like AAAAABBBBB can be generated using a recursive procedure. You just nest the AB’s like so:

(A(A(A(A(AB)B)B)B)B)

But—and this turns out to be important—there is nothing in the above structure that indicates it must have been generated recursively. You could just have a series of A’s followed by a series of B’s. This may seem like a trivial point, but what it means is that there could be recursion behind something that does not seem recursive. And the reverse is also true—something might look recursive, but be generated via non-recursive means. The AB sequence shown above could be generated either way. This means that some apparent examples of recursion might actually be something else, such as repetition or iteration. As we’ll see, this means it can be quite tricky to determine whether there truly are examples of recursive thought in non-human animals or human ancestors.
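Here’s the point in code (a sketch of my own): two procedures, one recursive and one iterative, that produce exactly the same string. Nothing in the output tells you which process generated it.

```python
def ab_recursive(n):
    if n == 0:
        return ""
    return "A" + ab_recursive(n - 1) + "B"  # nests one AB pair per call

def ab_iterative(n):
    return "A" * n + "B" * n                # two flat runs, no nesting

assert ab_recursive(5) == ab_iterative(5) == "AAAAABBBBB"
```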

Let’s start with a simple linguistic example. Let’s say I take a simple noun-verb phrase like, The dog has a ball. Let’s add another basic noun-verb phrase about the dog: The dog is brown. Each of these is a standalone idea. But I can nest them inside one another this way: The dog who is brown has a ball, or The brown dog has a ball, or The brown dog’s ball, etc.

Then let’s add this fact: The dog belongs to Erik. Therefore, Erik’s brown dog has a ball. Let’s say it’s my ball. Erik’s brown dog has my ball. Maybe the dog is barking at me right now. Erik’s brown dog, who has my ball, is barking at me right now. Do you get it? You get that Erik’s brown dog who has my ball is barking at me right now.

Anyway, we could go on doing this all day, but I think you get the point. Recursive structures can theoretically go on until infinity, but in reality are constrained. After all, there’s only so much time in the day. Corballis explains that recursive constructions need not involve embedding of exactly the same constituents, but constituents of the same kind—a process known as self-similar embedding. He gives the example of noun phrases. For example, Nusrat Fateh Ali Khan’s first album was entitled, “The Day, The Night, The Dawn, The Dusk” (you can listen to it here). That’s basically four noun phrases. From these constituents, one can make a new noun phrase like “The day gives way to night,” or perhaps a movie title like From Dusk Till Dawn.

Recursive language is needed to express recursive thoughts, and there’s a good case to be made that recursive thought is the key to our unique cognitive abilities. This is exactly the case Corballis makes.

…recursive processes and structures can in principle extend without limit, but are limited in practice. Nevertheless, recursion does give rise to the *concept* of infinity, itself perhaps limited to the human imagination. After all, only humans have acquired the ability to count indefinitely, and to understand the nature of infinite series, whereas other species can at best merely estimate quantity, and are accurate only up to some small finite number. Even in language, we understand that a sentence can in principle be extended indefinitely, even though in practice it cannot be–although the novelist Henry James had a damn good try…

Corballis mentioned Henry James, above, and below is his longest sentence. Click on the link to see its structure diagrammed, whereupon you can see the recursive (embedded) nature of his language more clearly.

“The house had a name and a history; the old gentleman taking his tea would have been delighted to tell you these things: how it had been built under Edward the Sixth, had offered a night’s hospitality to the great Elizabeth (whose august person had extended itself upon a huge, magnificent and terribly angular bed which still formed the principal honour of the sleeping apartments), had been a good deal bruised and defaced in Cromwell’s wars, and then, under the Restoration, repaired and much enlarged; and how, finally, after having been remodelled and disfigured in the eighteenth century, it had passed into the careful keeping of a shrewd American banker, who had bought it originally because (owing to circumstances too complicated to set forth) it was offered at a great bargain: bought it with much grumbling at its ugliness, its antiquity, its incommodity, and who now, at the end of twenty years, had become conscious of a real aesthetic passion for it, so that he knew all its points and would tell you just where to stand to see them in combination and just the hour when the shadows of its various protuberances—which fell so softly upon the warm, weary brickwork—were of the right measure.” (James 2003, 60)

The Henry James Sentence: New Quantitative Approaches (Jonathan Reeve)

The appealing aspect of recursion is that it can in principle extend indefinitely to create thoughts (and sentences) of whatever complexity is required. The idea has an elegant simplicity, giving rise to what Chomsky called “discrete infinity,” or what Wilhelm von Humboldt famously called “the infinite use of finite means.” And although recursion is limited in practice, we can nevertheless achieve considerable depths of recursive thought, arguably unsurpassed in any other species. In chess, for example, a player may be able to think recursively three or four steps ahead, examining possible moves and countermoves, but the number of possibilities soon multiplies beyond the capacity of the mind to hold them.

Deeper levels of recursion may be possible with the aid of writing, or simply extended time for rehearsal and contemplation, or extended memory capacity through artificial means. The slow development of a complex mathematical proof, for example, may require subtheorems within subtheorems. Plays or novels may involve recursive loops that build slowly—in Shakespeare’s Twelfth Night, for example, Maria foresees that Sir Toby will eagerly anticipate that Olivia will judge Malvolio absurdly impertinent to suppose that she wishes him to regard himself as her preferred suitor. (This is recursive embedding of mental states, in that Sir Toby’s anticipation is embedded in what Maria foresees, Olivia’s judgement is embedded in what Sir Toby anticipates, and so on).

As in fiction, so in life; we all live in a web of complex recursive relationships, and planning a dinner party may need careful attention to who thinks what of whom. pp. 8-9
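The chess example in the passage above is precisely what programmers implement as minimax: my best move, given your best reply, given my best reply, and so on, cut off at some depth because the possibilities multiply too fast. A generic sketch of my own, where moves() and score() stand in for hypothetical game-specific helpers:

```python
# Recursive lookahead, cut off at a fixed depth (a generic sketch;
# `moves` and `score` are hypothetical game-specific functions).
def minimax(state, depth, maximizing, moves, score):
    options = moves(state)
    if depth == 0 or not options:
        return score(state)  # stop recursing and just evaluate the position
    values = [minimax(s, depth - 1, not maximizing, moves, score)
              for s in options]
    return max(values) if maximizing else min(values)
```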

We do indeed live in a web of complex social relationships, but some of us live in a more complex web than others. A small village in the jungle is vastly different from a medieval free city, and certainly different from a modern city of millions of people. Similarly, where one lives and what one does for a living also have an effect. A politician or businessman lives in a much more complex social world than a painter or a ratcatcher. People who grow up in a village where everyone is related to one another have a much easier cognitive task than a traveling salesman, or an international diplomat.

I point all this out to prepare the way for an argument I’m going to make later on, which is my own, but loosely based on ideas from Julian Jaynes. I’m going to make the case that increasing social complexity in human societies over time selected for recursive thinking abilities. I will also argue that such abilities led to the creation of things like writing and mathematics, which emerged only several thousand years ago, and were initially the province of a small number of elites (indicating that such abilities may be quite recent). I will also argue that recursive thinking allowed for advanced organization and planning abilities, which early leaders used to justify their elevated social status. Furthermore, I will argue that the type of “reflective self” that Jaynes saw developing during the Axial Age was due to increasingly recursive modes of thought. It was not caused by social breakdown, but rather by the increasing cognitive challenges demanded by social structures, as opposed to the primarily environmental challenges that earlier humans faced. This should become clearer as we discuss the social benefits of recursive thinking below.

In other words, consciousness did not arise so much from the breakdown of the bicameral mind, as it did from the rise of the recursive mind. That’s my argument, anyway.

As recursive thinking advanced, so too did the abilities which Jaynes notes as giving rise to the construction of the reflective, vicarial self—extended metaphor, mental time travel, higher-order theory of mind, and so on, as we’ll see. The lack or paucity of recursive thought, in contrast, prior to this period, is what prevented reflective self-consciousness (or, in Jaynes’s parlance, “consciousness”) from developing. Thus my timeline is similar to Jaynes’s, as are the conclusions, but the underlying reasons differ. We’ll get into this in more depth later.

An example of the infinitely extensible nature of language is novelties like one-sentence novels, of which there are a surprisingly large number. Here is a good review of three of the best ones, where the review itself is written as a single sentence (providing yet another example of recursion!):

Awe-Inspiring One-Sentence Novels You Never Knew Existed (The GLOCAL Experience)

In 2016, an Irish novelist won a literary prize for a one-sentence novel. To me, this novel is exemplary of the kinds of recursive thinking we’re describing here, and how it’s necessary to construct the vicarial self (the Analog ‘I’ and Metaphor ‘me’). The novel demonstrates not only a highly embedded (recursive) sentence, but mental time travel, and advanced theory of mind (the ability to extrapolate the mental states of other characters by inserting oneself into their experience; a requirement of good fiction), and autobiographical narratization (about which, more below). We’ll cover each of these concepts in more depth:

It stutters into life, like a desperate incantation or a prose poem, minus full-stops but chock-full of portent: “the bell / the bell as / hearing the bell as / hearing the bell as standing here / the bell being heard standing here / hearing it ring out through the grey light of this / morning, noon or night”…The speaker hearing the bell is one Marcus Conway, husband, father and a civil engineer in some small way responsible for the wild rush of buildings, roads and bridges that disrupted life in Ireland during the boom that in the book has just gone bust. Marcus is a man gripped by “a crying sense of loneliness for my family”. We don’t quite know why until the very end of the novel, which comes both as a surprise and a confirmation of all that’s gone before.

Among its many structural and technical virtues, everything in the book is recalled, but none of it is monotonous. Marcus remembers the life of his father and his mother, for example, a world of currachs and Massey Fergusons. He recalls a fateful trip to Prague for a conference. He recalls Skyping his son in Australia, scenes of intimacy with his wife, and a trip to his artist daughter’s first solo exhibition, which consists of the text of court reports from local newspapers written in her own blood, “the full gamut from theft and domestic violence to child abuse, public order offences, illegal grazing on protected lands, petty theft, false number plates, public affray, burglary, assault and drunk-driving offences”. Above all, he remembers at work being constantly under pressure from politicians and developers, “every cunt wanting something”, the usual “shite swilling through my head, as if there weren’t enough there already”. He recalls when his wife got sick from cryptosporidiosis, “a virus derived from human waste which lodged in the digestive tract, so that […] it was now the case that the citizens were consuming their own shit, the source of their own illness”.

Single sentence novel wins Goldsmiths prize for books that ‘break the mould’ (The Guardian)

Solar Bones by Mike McCormack review – an extraordinary hymn to small-town Ireland (The Guardian)

In the example above, we can see how recursive thought is intrinsically tied to self-identity, which is in turn connected with episodic memory, which is also tied to recursion, as we will see. In brief, I will argue that recursive thought is tied to the kind of reflective self-consciousness that Jaynes was describing; as such, we are concerned not so much with the beginnings of language as the origin of consciousness, but with the beginnings of recursive thought as the beginning of consciousness. It is quite possible for spoken language to have existed for communicative purposes for thousands of years prior to recursive thought and its subsequent innovations.

I focus on two modes of thought that are recursive, and probably distinctively human. One is mental time travel, the ability to call past episodes to mind and also to imagine future episodes. This can be a recursive operation in that imagined episodes can be inserted into present consciousness, and imagined episodes can even be inserted into other imagined episodes. Mental time travel also blends into fiction, whereby we imagine events that have never occurred, or are not necessarily planned for the future. Imagined events can have all of the complexity and variability of language itself. Indeed I suggest that language emerged precisely to convey this complexity, so that we can share our memories, plans and fictions.

The second aspect of thought is what has been called theory of mind, the ability to understand what is going on in the minds of others. This too, is recursive. I may know not only what you are thinking, but I may also know that you know what I am thinking. As we shall see, most language, at least in the form of conversation, is utterly dependent on this capacity. No conversation is possible unless the participants share a common mind-set. Indeed, most conversation is fairly minimal, since the thread of conversation is largely assumed. I heard a student coming out of a lecture saying to her friend, “That was really cool.” She assumed, probably rightly, that her friend knew exactly what “that” was, and what she meant by “cool.” pp. ix-x

It goes beyond that, however. Later, we’ll look at work by Robin Dunbar which suggests that both organized religion and complex kinship groupings are dependent upon these same recursive thought processes. Given how important these social structures are to human dominance of the planet (perhaps the most important), we can see that recursion might be the skeleton key to all the things that make us uniquely human. This is especially true given the evidence (although it is disputed) that our predecessor species (i.e. Archaic Humans and earlier) were unable to engage in this kind of thinking.

Is Religion Merely A Cognitive Error?

One reason I’m intrigued by Jaynes’s idea is that it’s simply hard to explain the centrality of religion to ancient societies without recourse to something more than simple “cognitive errors.” After all, religion is costly. Think of all the time and energy that went into worshiping–whether it is elaborate rituals, lavish burials with grave goods, tombs, barrows and tumuli, sacrifices of both people and animals, dances and festivals, elaborate paintings and sculpture, and, of course, temples. Why didn’t atheistic societies take over societies that wasted huge amounts of resources in this way?

The conventional wisdom is that religion was necessary for group cohesion in the days before bureaucracy, written documents, centralized government, and related institutions. But something about that seems inadequate to me. Does one need to build pyramids to have a cohesive society? Does one need to bury their ruler with hundreds of terracotta warriors? Think of all the fantastic works of art, sculpture, and craftsmanship that were made simply to be sealed up in the tombs of Egypt and elsewhere. Think of all the craftsmanship that went into something like Tutankhamen’s death mask, for example. Even as far back as 34,000 years ago, people were burying some of their most labor-intensive goods in the ground.

Another school of thought has it down as just a massive case of collective denial. While denial is not just a river in Egypt, it is near that river that we see some of its most impressive manifestations. The idea is that by building structures that last longer than we do, we transcend death–that is, we conquer, in some sense, our own mortality. But why does everyone else go along with this? Were the workers just as motivated to deny their own death by working on the pyramids, despite no one knowing who they were?

Why not put all that effort into making real warriors and stone fortifications and take over one’s more superstitious neighbors bowing down to graven idols? Why not trade your highest quality stuff in markets instead of burying it or sealing it up forever in some tomb?

And that’s before we consider all the other strange behaviors. I’ve previously mentioned trepanation. From my (albeit limited) research, the two types of people who poke holes in their heads in modern times are these: voice hearers and LSD trippers. And what’s up with all the sacrifices?

Tower of human skulls found in Mexico City dig casts light on Aztec sacrifices (The Guardian)

Bowls of Fingers, Baby Victims, More Found in Maya Tomb (National Geographic)

When you start studying this stuff in depth, you realize that pretty much everything flowed from primitive religion in some way: politics, laws, marriage customs, inheritance, economic relationships, business partnerships, child-rearing, the status of women, family structures, and so on. Essentially, all laws and politics stemmed from religion. Huge amounts of social effort went into appeasing the gods. That’s one hell of a cognitive error!

Just how essential religion was to ancient cultures is summed up by this passage from The Ancient City:

A comparison of beliefs and laws shows that a primitive religion constituted the Greek and Roman family, established marriage and paternal authority, fixed the order of relationship, and consecrated the right of property, and the right of inheritance. This same religion, after having enlarged and extended the family, formed a still larger association, the city, and reigned in that as it had reigned in the family.

From [religion] came all the institutions, as well as all the private law, of the ancients. It was from this that the city received all its principles, its rules, its usages, and its magistracies. But, in the course of time, this ancient religion became modified or effaced, and private law and political institutions were modified with it. Then came a series of revolutions, and social changes regularly followed the development of knowledge.

It is of the first importance, therefore, to study the religious ideas of these peoples, and the oldest are the most important for us to know. For the institutions and beliefs which we find at the flourishing periods of Greece and Rome are only the development of those of an earlier age; we must seek the roots of them in the very distant past.

E. E. Evans-Pritchard summarized de Coulanges’ thesis this way:

The theme of The Ancient City is that ancient classical society was centred in the family in the wide sense of that word— joint family or lineage — and that what held this group of agnates together as a corporation and gave it permanence was the ancestor cult, in which the head of the family acted as priest.

In the light of this central idea, and only in the light of it, of the dead being deities of the family, all customs of the period can be understood: marriage regulations and ceremonies, monogamy, prohibition of divorce, interdiction of celibacy, the levirate, adoption, paternal authority, rules of descent, inheritance and succession, laws, property, the systems of nomenclature, the calendar, slavery and clientship, and many other customs. When city states developed, they were in the same structural pattern as had been shaped by religion in these earlier social conditions.

Traditions are basically dead people peer pressuring us. (Reddit Showerthoughts)

What appears to tie all of these together is ritual ancestor worship, also called veneration of the dead, ancestral veneration, or the cult of the dead. An ancestor cult is simply defined as “the continuing care of the dead under the assumption of their power.” And you see this emerging as religion in all larger, complex societies, from the New World to the Classical World to India to China to Indonesia. In China, especially, ancestral veneration was central to religious practice until relatively modern times, existing alongside philosophies like Taoism and Buddhism. In all of these societies, there seem to have been two parallel worships: the ancestor cult and a pantheon of deities who had some kind of power over the natural world.

Another thing you see repeatedly is the idea of a “layered world,” most likely derived from Shamanistic practices. There are always a minimum of three: the central world inhabited by humans, a lower world inhabited by the dead, and an upper world inhabited by gods. Some cosmologies add more–there are nine worlds in Norse cosmology, for example. There is also some sort of connector between the worlds. In Norse mythology, it was the world tree, Yggdrasil; in China it was the Celestial Pole. Many of these religions, especially those of early complex societies like ancient Egypt, Babylonia, China, and the Mayans, have a clear astrological basis as well: “Chinese theology may be also called Tiānxué 天學 (“study of Heaven”), a term already in use in the 17th and 18th century.” (Wikipedia)

The sheer universality of this phenomenon must have some sort of significance. Why do so many ancient societies worship their dead? Does it have something to do with the fact that, according to scientific surveys, a huge number of people report hearing, feeling, or even seeing their dead relatives during the grieving process? If you ask me, there’s been far too little overlap between anthropology and psychology.

I’m struck by just how similar Eurasian practices are among cultures that could not have possibly acquired them by cultural diffusion. For example, I was listening to a TS podcast with a Balinese art expert. He pointed out that although Bali is known for Hinduism, what’s lesser-known is that the original religion of Bali was ancestor worship, which is still practiced in villages. In this tradition, families need to pay for elaborate funerary rites, and make continuing offerings to appease the dead spirits.

“…I kept saying ‘Who you worshiping, Brahma, Vishnu or Shiwa?’ And then they answered [Jiro Gde?]…So [Jiro?] means elevated, and Gde means the Great One. It’s a term that probably is more descended from the animist period in the worship of great nature spirits. So the next question is, ‘Why are you doing this ceremony?’ Their response was also, like, ‘What do you ask such stupid questions for?’ I pressed them and pressed them. ‘Because, we always do it.’ It was, of course, part of an ancient religious cycle, and ceremonial cycle, ritual cycle, that had been going on for centuries, and nobody questioned the validity or reason. It was obligatory. It was you did it because you had to do it.”

“Another thing that many people don’t understand about the Balinese system of ancestor worship, which is also related to the tribal groups, is that the major purpose of cremation, and why cremations are joyous events, is to send off the spirit of the deceased to the land of the ancestors. And the reason you want to do it is because before you’ve successfully fulfilled this very important ritual in the human life cycle, their spirits hang around here on earth. And the longer and more dissatisfied they are, the more trouble they can bring…all kinds of bad things. So basically, you want to get rid of them. You want to send them off in a glorious way so they’re happy.”

“And it doesn’t end there. It’s not like you just send them away. It’s like having somebody who becomes a member of Congress. You have a symbiotic relationship. And the symbiotic relationship is you constantly have to give offerings at the temple and do all sorts of things. They become the representatives of the family here in the celestial realm, and because of them, they bring good luck and blessings and prevent disasters from happening. So, in a certain sense, it’s a payoff religion. And this is true of most of the traditional societies in Indonesia.”

“For instance, the cremation here. Before there was cremation–you can see it in Pejeng, an area near Ubud, where they have the most ancient bronze age stone sarcophagi–they used to bury them there. That’s the secondary burial. The first one is because cremation and secondary burials like the ones in [Taraja?] are extremely expensive. It can bankrupt families. You have to borrow money and they’re very, very demanding. Balinese religion is a really demanding religion. Bali has the highest rate of suicide in Indonesia, and it is because of the religion. They’re constantly having to borrow money; they’re running from one debt to another debt…” [45:40]

Surprisingly, I couldn’t find much about Balinese ancestor worship online, but one snippet I did find is below from a book called The Anthropological Romance of Bali:

Relatively corporate ancestor-groups are optional in Balinese social structure and are actualized by building a high-level (supra-household) temple, often complemented by making intratemple marriages – for example, father’s-brother’s daughter. As the congregation supporting an ancestor’s temple expands, genealogical connections become obscure: outsiders might even be admitted if costs and upkeep grow burdensome; traditions of an ideal descent line may, however, persist. Yet the social integration of the group rests more on its temple duties per se and marriages between its members. According to high-caste traditions the ideal conveyors of a group’s identity and status are eldest sons of eldest sons, especially if they are born of a marriage with a near patrikinswoman.

Emphasis on eldest lines is an optional aspect of Balinese descent. Rules for actual inheritance of house property range from primogeniture to ultimogeniture, and every son assumes particular ceremonial responsibilities for ancestral shrines according to the share of productive fields and other material wealth received after the father’s death.

It is in certain textual traditions – the special province of royal houses, but imitated by ascendant commoner groups – that emphasis falls on eldest sons. An eldest son of the eldest agnatic line who is also the offspring of a patricousin marriage is enhanced in and of his descent; from birth he would be expected to be individually meritorious in keeping with this auspicious genealogy.

But occupants of the most highly regarded genealogical positions are not necessarily bearers of the most elaborate legends. Practical leadership of a group often falls to members not automatically qualified by descent. More pragmatic qualities take precedence, and the figures of actual leaders are then apt to be embellished, almost apologetically, with posthumous legends, stories, and anecdotes to show why it was – actual genealogical position notwithstanding – that they succeeded to leadership.

Compare this to various passages from The Ancient City giving a description of the Graeco-Roman veneration of the dead and the social organization that flowed from it:

The father ranks first in presence of the sacred fire. He lights it, and supports it; he is its priest. In all religious acts his functions are the highest; he slays the victim, his mouth pronounces the formula of prayer which is to draw upon him and his the protection of the gods. The family and the worship are perpetuated through him; he represents, himself alone, the whole series of ancestors, and from him are to proceed the entire series of descendants. Upon him rests the domestic worship. He can almost say, like the Hindu, “I am the god.” When death shall come, he will be a divine being whom his descendants will invoke. p. 69

[The] son had also his part in the worship; he filled a place in the religious ceremonies; his presence on certain days was so necessary that the Roman who had no son was forced to adopt a fictitious one for those days, in order that the rites be performed. And here religion established a very powerful bond between father and son. They believed in a second life in the tomb–a life happy and calm if the funeral repasts were regularly offered. Thus the father is convinced that his destiny after this life will depend on the care that his son will take of his tomb, and the son, on his part, is convinced that his father will become a god after death, whom he will have to invoke…

The old religion established a difference between the older and the younger son. “The oldest,” said the ancient Aryas, “was begotten for the accomplishment of the duty due the ancestors; the others are the fruit of love.” In virtue of this original superiority, the oldest had the privilege, after the death of the father, of presiding at all the ceremonies of domestic worship; he it was who offered the funeral repast, and pronounced the formulas of prayer: “for the right of pronouncing the prayers belongs to that son who came into the world first.” The oldest was, therefore, heir to the hymns, the continuator of the worship, the religious chief of the family. From this creed flowed a rule of law: the oldest alone inherited property. Thus says an ancient passage, which the last editor of the Laws of Manu still inserted in the code: “The oldest takes possession of the whole patrimony, and the older brothers live under his authority as if they were under that of their father. The oldest son performs the duties towards the ancestors; he ought, therefore, to have all.”

Greek law is derived from the same religious beliefs as Hindu Law; it is not astonishing, then, to find here also the right of primogeniture. Sparta preserved it longer than other Greek cities, because the Spartans were longer faithful to old institutions; among them patrimony was indivisible, and the younger brothers had no part of it. It was the same with many of the ancient codes that Aristotle had studied. He informs us, indeed, that the Theban code prescribed absolutely that the number of lots of land should remain unchangeable, which certainly excluded the division among brothers. An ancient law of Corinth also provided that the number of families should remain invariable, which could only be the case where the right of the oldest prevented families from becoming dismembered in each generation…

Sometimes the younger son was adopted into a family, and inherited property there, sometimes he married an only daughter; sometimes, in fine, he received some extinct family’s lot of land. When all these resources failed, younger sons were sent out to join a colony. pp. 66-67

It is clearly evident that private property was an institution that the domestic religion had need of. This religion required that both dwellings and burying-places should be separate from each other; living in common was, therefore, impossible. The same religion required that the hearth should be fixed to the soil, that the tomb should neither be destroyed nor displaced. Suppress the right of property, and the sacred fire would be without a fixed place, the families would become confounded, and the dead would be abandoned and without worship. By the stationary hearth and the permanent burial-place, the family took possession of the soil; the earth was in some sort imbued and penetrated by the religion of the hearth and of ancestors.

Thus the men of the early ages were saved the trouble of resolving too difficult a problem. Without discussion, without labor, without a shadow of hesitation, they arrived, at a single step and merely by virtue of their belief, at the conception of the right of property; this right from which all civilization springs, since by it man improves the soil and becomes improved himself. Religion, and not laws, first guaranteed the right of property. Every domain was under the eyes of household divinities, who watched over it…pp. 52-53

Thanks to the domestic religion, the family was a small organized body; a little society, which had its chiefs and its government. Nothing in modern society can give us an idea of this paternal authority. In primitive antiquity the father is not alone the strong man, the protector who has power to command obedience; he is the priest, he is heir to the hearth, the continuator of the ancestors, the parent stock of the descendants, the depository of the mysterious rites of the worship, and of the sacred formulas of prayer. The whole religion resides in him. p. 71


Just as each family had its own religion based on its ancestors, so too did each tribe have its own ancestral worship, leading to a sort of fractal, or recursive, organization of society around religion. In anthro jargon, these religions formed pantribal sodalities. Here is a description of the earliest forms of Chinese ancestor worship by Sir Leonard Woolley:

In the religions of latter-day China a very prominent part is played by ancestor worship. Since ancestor worship is wholly alien to Buddhism in its pure form as taught by Buddha, and since it is not included in the teaching (which is more philosophical than religious) of Lao Tzu, the founder of Taoism, its origin has to be sought elsewhere, and recent discoveries have proved that it is far older than any one of the systems which have been engrafted on it and must be accounted as a survival from the earliest days of Chinese civilization.

According to that belief a man’s real power began when he died. Death transformed the mortal man into a spirit, possessed of undefined but vast powers where his descendants were concerned. While not quite omniscient or omnipotent, the spirits could grant, or withhold, success in hunting, in agriculture, in war or in anything else, and they could punish those who failed to please them with famine, defeat, sickness or death; so awful were they that it was dangerous even to pronounce the personal names they had borne in life, and they were designated by their relationship and the day on which they were born or died, as “Grandfather Tuesday”, “Elder-brother Saturday”, and so on.

To the dead, then, offerings had to be made, both at the time of burial and afterwards, so long as the family remained. The dead man, wrapped, apparently, in matting, was laid in the grave with such furniture as his relatives could afford–in the case of the very poor with a few pottery vessels and perhaps a bronze dagger-axe, while an official of high rank might have a profusion of beautifully cast decorated bronze vessels. These were genuine objects, not the crude copies which in later times were specifically manufactured for burial purposes, nor the flimsy paper imitations of still more recent days; the Shang people seem not to have evolved the idea that spirits can be satisfied as much by the ‘ghosts’ of things as by the things themselves; for them the spirits were real and the offerings made to them must be real also.

In the case of kings realism was carried to the farthest extent. A pit was dug which might be 60 feet square and over 40 feet deep, with on each side a sloped passage or stairway leading down from ground level. In the pit, and covering the greater part of its area, there was constructed a tomb-chamber of wood finely carved or adorned with designs in polychrome lacquer; in this was laid the body of the king, and in and around it an astonishing wealth of objects, including such things as chariots with their horses, the bodies of attendants, women wearing elaborate head-dresses of turquoise or soldiers with copper helmets; then the pit was filled with earth pounded into a solid mass as was done for house foundations, and in the filling more human victims were also buried, so that the total number might run into two or three hundred.

After this elaborate ritual of burial, which bears in details a remarkable resemblance to the Sumerian ritual of the Early Dynastic period and may, like the use of metal, be due to western influences, there was still need for the regularly recurrent sacrifices which furnished nourishment for the dead and won their favourable response to prayer. The spirits of the ancestors dwelt with and were under the rule of Ti, the great god, and they acted as mediators and intercessors between him and their human descendants; prayers to the ancestors take the form of imploring them to ask god to do this or that.

This mediation would be forthcoming only if the spirits were satisfied by the proper offerings. The character of these can be gathered from bone inscriptions. Drink offerings of spirituous liquor seem to have been the only product of the soil that was presented to the dead, or to the gods; of such things as bread or fruit there is no mention – in fact, according to a story of the Chou period, when a high official directed in his will that during the first year after his death his favourite delicacy, water-chestnuts, should be sacrificed to him, his strait-laced son decided that filial duty must give way to orthodox tradition and refused to carry out so irregular an order.

The normal sacrifices were of men and animals – cattle, sheep, pigs, dogs, and occasionally horses and birds. The total number of victims sacrificed at a time was usually small, from one to ten; but for an important ceremony it might be very large – one hundred cups of liquor, one hundred sheep and three hundred cattle; and in several inscriptions a hundred and even three hundred human victims are mentioned. The human victims of a tomb sacrifice performed after the actual burial, either as the last act of the ceremony or at a later date, were decapitated and buried in pits, ten to a pit, sometimes with their hands tied behind their backs, furnished each with a uniform outfit of small bronze knives, axe-heads and grinding-stones, and their skulls were buried separately, in small square pits close by. With reference to these victims the bone inscriptions use different words: sometimes ‘men’, sometimes ‘captives’, but most often, and always where large numbers are concerned, ‘Ch’iang’ which, as written, combines the signs for ‘men’ and ‘sheep’ and is said to mean ‘barbarian shepherds of the West’.

All sacrifices other than those in the tombs of the kings were celebrated in temples, in ‘the House of the Spirits’. About the ritual very little is known. The liquor was poured out on the ground as a libation; animals, or special parts of the animals, were generally burnt by fire, but sometimes buried in the earth or thrown into water; the last two methods were employed for offerings to human ancestors, while the burnt offerings, according to the oracle bones, were destined for the gods; but how far this distinction really held good it is impossible to say, and it may even be that for the Shang people the distinction was too vague to be consistently observed.

…there were gods. Some of these were powers of nature or natural features; one oracle bone records ‘a burnt offering of four cattle to the sources of the Haan river’, the river on which the city Shang stood, perhaps an offering made because of drought such as that of c. 1190 BC when the river ceased to flow. The earth was a deity which later, and probably in Shang times also, was symbolized as an earthen mound (‘the Earth of the region’) piled up in the center of each village; possibly this is the ‘Queen Earth’ of after ages. Mention is made of the ‘Dragon Woman’ and of the ‘Eastern Mother’ and the ‘Western Mother’ and of the ‘Ruler of the [Four?] Quarters’; sacrifices are offered to the east, west and south, and to the wind, the ‘King wind’ and ‘the Wind, the Envoy of Ti’. Ti, or Shangti, ‘The Ruler Above’, seems to have been the chief god. He was specially concerned with war, and the king of Shang would not open a campaign without consulting Ti; he was asked about the prospects of the year’s crops, he was one of the powers who could assure sufficient rain, and generally he could allot good or bad fortune to men. War was, perhaps, his peculiar province, but his other attributes were shared by other gods and by the ancestors; at best he ranked as primus inter pares. It has, indeed, been suggested that he was himself but a deified ancestor, the progenitor of all the Shang kings, or that he embodies all the royal ancestry; that is possible, but the argument adduced in support of the theory, namely the fact that certain of the Shang kings bear such names as Ti I and Ti Hsin, could just as well be urged against it, seeing that theophoric names, i.e. names compounded with the name of the god, of the sort common in Sumer and in other lands of the ancient Middle East, imply the recognition of an already existing deity.

Both the gods and the ancestors existed; they had knowledge and they had power, power for good and for evil. The purpose of religion was therefore twofold: to secure by offerings the favour of the gods, so that they might grant to the suppliant not evil but good, and to wrest from the gods the knowledge that would guide his actions in this world. The sacrifices have been described; the knowledge was to be gained by divination.

One method of divination was, probably, by mediums, in Shang as in later days, but naturally no material evidence for that remains. The other method, for which we have evidence in plenty, was the interpretation of the cracks produced by heat in tortoise-shell or in bone. Of the two materials the former seems to have been the original and the most efficacious, for there were frequent references to consulting ‘the tortoise’, or ‘the Great Tortoise’, whereas bone is never mentioned as such. When, in 1395 BC, P’an Keng shifted his capital to Anyang he reminded his discontented subjects, ‘You did not presumptuously oppose the decision of the Tortoise’.

The questions are severely practical. Some deal with sacrifice, to whom it should be made—it was, of course, essential to find out which deity had to be propitiated—and when, and with what kind of offerings. A very common subject is war; the king enquires of the oracle when to declare war, how many men to engage, whether to attack or remain on the defensive, and what prospects there were of booty and prisoners. The crops–the outlook for each kind of grain and for the output of liquor; the weather, not only the general forecast but the immediate–‘Will it rain tonight?’ (and in a few cases we are given not only the official answer ‘No’ but the comment ‘It really didn’t rain!’); illness—will the patient recover?; dreams—does such and such a dream portend good or evil?; and the astrologer’s usual gambit, ‘Will next week be lucky or unlucky?’; and finally, and very often, ‘Will the Powers help?’ ‘Shall I receive aid?’ ‘Will the spirit of Grandfather aid the king?’ Such is the information that man in ancient China desired to obtain from the spirit world, and to obtain it was the whole purpose of religion.


This organization not only provided the social contract but, as noted above, the notion of private property. Each family required its own ancestral tomb and sacred hearth. It therefore had its own land, owned not by individuals but by joint families. Some societies preserved this organization into modern times. In his book Primitive Property, Laveleye looks at the village communities of India and Java for a model of how primitive communities arranged their economic relations, such as land ownership:

In some remote regions the most archaic form of community is to be found, of which ancient authors make such frequent mention. The land is cultivated in common, and the produce divided among all the inhabitants. At the present time, however, collectivity no longer exists generally, except in the joint-family. This family community still exists almost everywhere, with the same features as the zadruga of the Southern Slavs.

Each family is governed by a patriarch, exercising despotic authority. The village is administered by a chief, sometimes elected, sometimes hereditary. In the villages where the ancient customs have been maintained, the authority belongs to a council, which is regarded as representing the inhabitants. The most necessary trades, such as those of the smith, the currier, the shoemaker, the functions of the priest and the accountant, devolve hereditarily in certain families, who have a portion of the land allotted to them by way of fee…In England, there are numerous traces to show that a custom formerly existed there exactly similar to that practised in India, a remarkable instance of the persistence of certain institutions in spite of time and national migrations.

This intimate association which forms the Hindu village rests even at the present day on family sentiment; for the tradition, or at least the idea, prevails among the inhabitants of descent from a common ancestor: hence arises the very general prohibition against land being sold to a stranger. Although private property is now recognized, the village, in its corporate capacity, still retains a sort of eminent domain. Testamentary disposition was not in use among the Hindus any more than among the Germans or the Celts. In a system of community there was no place for succession or for legacies. When, in later times, individual property was introduced, the transmission of property was regulated by custom.

As Sir H. Maine remarks, in the natural association of the primitive village, economical and juridical relations are much simpler than in the social condition, of which a picture has been preserved to us in the old Roman law and the law of the Twelve Tables. Land is neither sold, leased, nor devised. Contracts are almost entirely unknown. The loan of money for interest has not been thought of. Commodities only are the subject of ordinary transaction, and in these the great economic law of supply and demand has little room for action. Competition is unknown, and prices are determined by custom. The rule, universal with us, of selling in the dearest market possible and buying in the cheapest, cannot even be understood. Every village and almost every family is self-sufficient. Produce hardly takes the form of merchandise destined for exchange, except when sent to the sovereign as taxes or rent. Human existence almost resembles that of the vegetable world, it is so simple and regular.

In the dessa of Java, and in the Russian mir, we can grasp, in living form, civilization in its earliest stage, when the agricultural system takes the place of the nomadic and pastoral system. The Hindu village has already abandoned community, but it still retains numerous traces of it. In its relations with the state, the village is regarded as a jointly responsible corporation. The state looks to this corporation for the assessment and levying of imposts, and not to the individual contributor…The village owns the forest and uncultivated land, as undivided property, in which all the inhabitants have a right of enjoyment. As a rule, the arable land is no longer common property, as in Java or in Germany in the days of Tacitus. The lots belong to the families in private ownership, but they have to be cultivated according to certain traditional rules which are binding on all.

It appears that cultures like Bali, Java, India, China and the Graeco-Roman world had two distinct religions. The older one was the veneration of one’s ancestors, centered around the domestic temple or hearth and based on the ongoing maintenance of the relationship with the dead–the ceremonial offerings; the funerary repasts; the sacrificial rites; burial practices; and so on. The other was a broader public worship of a pantheon of Major Deities connected to nature or the stars, based in temples and mediated by a professional class of priests. It was this latter worship, de Coulanges attests, that allowed the ancient city-states to form.

We are correct, therefore, in saying that this second religion was at first in unison with the social condition of men. It was cradled in each family, and remained long bounded by this narrow horizon. But it lent itself more easily than the worship of the dead to the future progress of human association. Indeed, the ancestors, heroes, and manes were gods who by their very nature could be adored only by a very small number of men, and who thus established a perpetual and impassable line of demarcation between families.

The religion of the gods of nature was more comprehensive. No rigorous laws opposed the propagation of the worship of any of these gods. There was nothing in their nature that required them to be adored by one family only, and to repel the stranger. Finally, men must have come insensibly to perceive that the Jupiter of one family was really the same being or the same conception as the Jupiter of another, which they would never believe of two Lares, two ancestors, or two sacred fires.

Let us add, that the morality of this new religion was different. It was not confined to teaching men family duties. Jupiter was the god of hospitality; in his name came strangers, suppliants, “the venerable poor,” those who were to be treated “as brothers.” All these gods often assumed the human form, and appeared among mortals; sometimes, indeed, to assist in their struggles and to take part in their combats; often, also, to enjoin concord, and to teach them to help each other.

As this second religion continued to develop, society must have enlarged. Now, it is quite evident that this religion, feeble at first, afterwards assumed large proportions. In the beginning it was, so to speak, sheltered under the protection of its elder sister, near the domestic hearth. There the god had obtained a small place, a narrow cella, near and opposite to the venerated altar, in order that a little of the respect which men had for the sacred fire might be shared by him. Little by little, the god, gaining more authority over the soul, renounced this sort of guardianship, and left the domestic hearth. He had a dwelling of his own, and his own sacrifices. This dwelling (ναός, from ναίω, to inhabit) was, moreover, built after the fashion of the ancient sanctuary; it was, as before, a cella opposite the hearth; but the cella was enlarged and embellished, and became a temple. The holy fire remained at the entrance of the god’s house, but appeared very small by the side of this house. What had at first been the principal, had now become only an accessory. It ceased to be a god, and descended to the rank of the god’s altar, an instrument for the sacrifice. Its office was to burn the flesh of the victim, and to carry the offering with men’s prayers to the majestic divinity whose statue resided in the temple.

When we see these temples rise and open their doors to the multitude of worshipers, we may be assured that human associations have become enlarged… pp. 103-104

Why so many gods? I found an article about Hinduism–the largest living polytheistic religion–that gives a good explanation. Even the spirit world apparently requires bureaucracy and middle management:

…For a country, state, or city to run properly, the government creates various departments and employs individuals within those departments — teachers, postal workers, police and military personnel, construction workers, doctors, politicians, and so many more. Each of these departments employs hundreds or thousands of individuals carrying out their respective duties and each sector has an individual or multiple individuals that oversee the activities of that one unit. Each head of an area is endowed with certain privileges and powers which facilitates them in their tasks. It’s safe to say that the number of individuals working for the United States government goes into the millions. This is just to keep one country working. Multiply that by all the countries on the planet, which is around 200, and all the people working for these governments, and the total would easily come out to tens of millions of people employed by the various governments of the world to run one planet.

The way it’s explained is that in order to keep the universe running, Krishna, the supreme being, has put into place individuals that oversee different parts of the material universe. These individuals are powerful beings that have been appointed by Krishna and have been bestowed with the necessary powers and abilities to manage and govern their area of creation. They can be referred to as demigods. For example, there is someone responsible for the sun and his name is Surya. The goddess Saraswati is the overseer of knowledge. The creator of the material universe is known as Brahma. The destruction of the universe is overseen by Shiva and Vishnu serves as the maintainer. There are individuals overseeing the oceans, the wind, and practically every facet of creation. When seen from this perspective, 33 million is not that big a number.

The 33 Million Gods of Hinduism (Huffington Post)

Because the pantheon of gods was not associated with a specific family, unlike the ancestral deities or protector spirits, worship was open to all. This allowed larger associations to form.

de Coulanges goes on to describe how each city had its own patron god or goddess who watched over and protected the city. In this way, they were quite similar to Babylonian cities, which were also based around the worship of a particular tutelary deity (Marduk with Babylon, Ashur with Assur, Enlil with Nippur, Ishtar with Arbela, etc.). The relationship of the citizens of the polis was the same as that of the corporate family writ large. The sacred worship of the ancestors was transferred to the city’s patron god or goddess. The demos was a kind of congregation, united in worship. It is only in this context, argued de Coulanges, that the institution of the ancient city can be fundamentally understood.

As Michael Hudson has argued, cities themselves were established on earlier sacred sites, which date back to prehistoric meeting places of sacred congregation and feasting. For example, it has recently been discovered that Stonehenge was a site of ritual feasting for inhabitants from the distant corners of the British Isles. As Hudson writes, “The earliest urban sites were sanctified, commercial, peaceful, and often multiethnic.”

The multiethnic character of southern Mesopotamian cities (and others as well) led them to formalize rituals of social integration to create a synthetic affinity. Urban cults were structured to resemble the family – a public family or corporate body with its own foundation story such as that of Abraham of Ur for the Jews, or heroic myths for Greek cities. Over these families stood the temples, “households of the gods,” whose patron deities were manifestations of a common prototype and given local genealogies.

Assyriologists have noted that early Mesopotamian rulers downplayed their family identity by representing their lineage as deriving from the city‑temple deities. Sargon of Akkad, often taken as a prototype for the myth of the birth of royal heroes (including Moses and Romulus) emphasized his “public family.” In any event archaic clan groupings seem to have been relatively open to newcomers. There is little Bronze Age evidence for closed aristocracies of the sort found in classical antiquity. Mesopotamia seems to have remained open and ethnically mixed for thousands of years, and the Sumerians probably incorporated strangers as freely as did medieval Irish feins and many modern tribal communities…

Even as cities became more secular in classical times, their administrative focus remained shaped to a large extent by sacred rituals. Town planners were augurs, more concerned with reading omens than with the more pragmatic aspects of city planning. In an epoch when medicine was ritualistic and doctors often were in the character of shamans, the idea of promoting health was to perform proper rituals at the city’s foundation rather than to place cities on slopes for good drainage. (This is why it was considered auspicious to build Rome around the mosquito‑ridden Forum.) Material considerations were incorporated to the extent that they could be reconciled with the guiding social cosmology.

Many millennia were required before a common body of law came to govern the city and the land, temples and palaces in a single code. Polis-type cities and their law codes combining hitherto separate public and private, sacred and secular functions were relatively late. And when such cities arose, in classical times, they had become much more genetically closed than was the case in archaic towns.

However, the citizens of the polis were still simultaneously members of multiple, overlapping sodalities—clans, tribes, phratries, neighborhoods, genē, and so on. Yet each association was based around religion. Some associations were by birth and others were by choice. At different points in their lives, people became members of these multiple overlapping social associations and cults:

From the tribe men passed to the city; but the tribe was not dissolved on that account, and each of them continued to form a body, very much as if the city had not existed. In religion there subsisted a multitude of subordinate worships, above which was established one common to all; in politics, numerous little governments continued to act, while above them a common government was founded…

Thus the city was not an assemblage of individuals; it was a confederation of several groups, which were established before it, and which it permitted to remain. We see, in the Athenian orators, that every Athenian formed a portion of four distinct societies at the same time; he was a member of a family, of a phratry, of a tribe, and of a city. He did not enter at the same time and the same day into all these four, like a Frenchman, who at the moment of his birth belongs at once to a family, a commune, a department, and a country. The phratry and the tribe are not administrative divisions. A man enters at different times into these four societies, and ascends, so to speak, from one to the other. First, the child is admitted into the family by the religious ceremony, which takes place six days after his birth. Some years later he enters the phratry by a new ceremony, which we have already described. Finally, at the age of sixteen or eighteen, he is presented for admission into the city.

On that day, in the presence of an altar, and before the smoking flesh of a victim, he pronounces an oath, by which he binds himself, among other things, always to respect the religion of the city. From that day he is initiated into the public worship, and becomes a citizen. If we observe this young Athenian rising, step by step, from worship to worship, we have a symbol of the degrees through which human association has passed. The course which this young man is constrained to follow is that which society first followed. The Ancient City, pp. 104-106
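
The young Athenian’s ascent–family, phratry, tribe, city, each level with its own cult–is the recursive pattern noted above: an association of associations, with the same structure (a worship plus the smaller groups it contains) repeated at every scale. Purely as an illustration (the names and labels below are my own, not de Coulanges’s), the arrangement can be sketched as a recursive data structure:

    # A minimal sketch of the nested organization de Coulanges describes.
    # Each level of association has a cult of its own and contains the
    # smaller associations beneath it, so membership ascends recursively.
    # Names and labels are invented for illustration.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Association:
        name: str      # e.g. "family", "phratry", "tribe", "city"
        cult: str      # the worship proper to this level
        members: list[Association] = field(default_factory=list)

        def describe(self, depth: int = 0) -> None:
            """Print each nested association and its cult, outermost first."""
            print("  " * depth + f"the {self.name} worships {self.cult}")
            for sub in self.members:
                sub.describe(depth + 1)

    city = Association("city", "its patron god or goddess", [
        Association("tribe", "the tribal hero", [
            Association("phratry", "the phratry's god", [
                Association("family", "its own hearth and ancestors"),
            ]),
        ]),
    ])
    city.describe()

Running it prints the four worships, one indent per level: the family’s cult nested inside the phratry’s, the phratry’s inside the tribe’s, the tribe’s inside the city’s.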

It was this worship, mediated by priests and based in temples, that allowed for greater levels of social complexity than tribal groupings. Wherever an organized, professional, bureaucratic priesthood emerged, we see a scaling-up of social complexity and the emergence of permanent status hierarchies. Certain families are ranked higher than others, either by an ability to mediate with transcendent deities or through descent from a particularly prestigious ancestor. Often the head of this lineage becomes the first de facto ruler. And there is always a connection between the priesthood and the political ruling class. Sometimes they are one and the same, as in a theocracy. Other times they are ideological allies, with the secular authority in the driver’s seat (called Caesaropapism). Since the priests are mediators between men and the gods, their services are essential–not to mention expensive. We’ve previously shown that the donations to the priestly class (as described in Leviticus, for example) were the origin of taxation. And the need to assess these donations against one another was the impetus for the development of money, which originated as a system of measurement. Thus, primitive general-purpose money was always and everywhere associated with priests, kings, and temples.

The economist John F. Henry wrote an account of the Egyptian religion, and of how it changed over time as a priestly cult centered around astrology and the gods formed, necessitating the development of money:

Tribal societies practised magic in which the community exercised a collective relationship with their deceased ancestors, who were believed to inhabit a spirit world that was part of nature. The deceased were to continue to fulfill their social obligations by communicating tribal commands to those forces of nature which could not be understood by pre-scientific populations.

Totemism differs from mature religion in that no prayers are used, only commands. The worshipers impose their will on the totem by the compelling force of magic, and this principle of collective compulsion corresponds to a state of society in which the community is supreme over each and all of its members … the more advanced forms of worship, characteristic of what we call religion, presupposed surplus production, which makes it possible for the few to live off the labour of the many.

The king had been chosen and approved by the gods and after his death he retired into their company. Contact with the gods, achieved through ritual, was his prerogative, although for practical purposes the more mundane elements were delegated to priests. For the people of Egypt, their king was a guarantor of the continued orderly running of their world: the regular change of seasons, the return of the annual inundation of the Nile, and the predictable movements of the heavenly bodies, but also safety from the threatening force of nature as well as enemies outside Egypt’s borders.

Signifying the new state of affairs was the temple which was not only ‘…an architectural expression of royal power, it was for them a model of the cosmos in miniature’. And, while the pharaohs were careful not to supplant the clan (magic) cults with the new centralized religion (until the ill-fated experiment of Akhetaten, that is), the pharaoh became ‘…theoretically, the chief priest of every cult in the land’.

The state religion was structured around Re and Osiris, emphasizing continual renewal in a never-ending cycle of repetition. The ideological thrust was one of permanence and long-standing tradition. This, even as change took place and fundamental political innovations were introduced, ‘…(the) tendency for Egyptian kings (was) not to emphasize what innovations they were instituting, but rather to stress how they were following long traditions…’

Essentially, the spirit world was converted to one of gods, and the control of nature, previously seen as a generally sympathetic force, was now in the hands of priests. Nature itself became hostile and its forces, controlled by gods, required pacification through offerings. The king–the “one true priest”–and the priests placed themselves as the central unifying force around which continued economic success depended. In so doing, they could maintain the flow of resources that provided their enormously high levels of conspicuous consumption and wasteful expenditures that certified their status as envoys to the natural world.

Under the new social organization, tribal obligations were converted into levies (or taxes, if one views this term broadly enough). The economic unit taxed was not the individual but the village… Wray, et al., Credit and State Theories of Money, pp. 89-91

It’s also interesting that these ancestral death cults did not contain any kind of moral code, which became so central to later religions, including the ones most people follow today. They also had no creeds or dogmas. Here is Evans-Pritchard, again:

To understand … primitive religion in general, …we have to note that he held that early religions lacked creeds and dogmas: ‘they consisted entirely of institutions and practices.’ Rites, it is true, were connected with myths, but myths do not, for us, explain rites; rather the rites explain the myths. If this is so, then we must seek for an understanding of primitive religion in its ritual, and, since the basic rite in ancient religion is that of sacrifice, we must seek for it in the sacrificium; and further, since sacrifice is so general an institution, we must look for its origin in general causes.

Fundamentally, Fustel de Coulanges and Robertson Smith were putting forward what might be called a structural theory of the genesis of religion, that it arises out of the very nature of primitive society. This was also Durkheim’s approach, and he proposed to show in addition the manner in which religion was generated. The position of Durkheim…can only be appraised if two points are kept in mind.

The first is that for him religion is a social, that is an objective, fact. For theories which tried to explain it in terms of individual psychology he expressed contempt. How, he asked, if religion originated in a mere mistake, an illusion, a kind of hallucination, could it have been so universal and so enduring, and how could a vain fantasy have produced law, science, and morals?

And that is exactly the fundamental question I have.

Shifting the Overton Window

“All truth passes through three stages: First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self-evident.”
–Arthur Schopenhauer

When I started this blog way, way back in 2011 (whoa, has it been that long‽), one of the first things I wrote was a series of lengthy posts about automation: What Are People Good For? I returned to that topic frequently over the years, although not so much lately. For instance, here’s an oldie from 2011: Job Myths & Realities. Here’s another: The New York Times Discovers the Jobless Future.

So, here in 2019, it’s surreal to see everything I said back then going mainstream. At least, that’s what I think when I listen to presidential candidate Andrew Yang, who has been saying pretty much the same things.

Here’s one from 2016: Automation and the Future of Work: It’s Already Happened. And I discussed the effect on the African-American community in Automation and the Future of Work: Black Lives Matter.

Now, I don’t agree with everything he’s saying about solutions. I have certain problems with UBI, and I have different ideas about the best solutions, but that’s a topic for another post. However, it is nice to hear someone talking about these problems rather than the usual “let them eat training.” Until now, elites have stubbornly stuck to the idea that deindustrialization worked out great for everyone, and resisted any idea that vast swaths of America have been reduced to third-world living standards outside of a handful of elite citadels and gated suburbs. The anger in the Midwest and the Heartland came as a shock to the cloistered Neoliberal class when that anger led to throwing a monkey wrench into the gears of collective governance by knowingly electing an incompetent proto-fascist grifter for president. “How could this happen?” the elites wondered. They must just all be racists.

I’ve been listening to an old interview between Yang and Ezra Klein. Klein is the poster child for the kind of coastal-dwelling, hyperprivileged, credential-class elite that lives in a permanent bubble. I get the feeling he’s never even been to flyover country, and would probably be more at home in downtown Kuala Lumpur than he would be in my location in Milwaukee. I’m sure Midwesterners would be as exotic to him as the headhunting highlanders of New Guinea. In the interview with Yang, he rolls out every trope in the book to deny that there’s any sort of problem with jobs or automation, including the hoary old “we all used to be farmers, and now look at us!” trope. Get better journalists.

Last Week Tonight with John Oliver also did a terrible job, rolling out the typical lazy thinking and specious arguments against the impact of automation and deindustrialization. Sometimes YouTube comments are intelligent.

For a much better discussion, here’s a clip from the Sam Harris interview (YouTube)

I also wrote way back then about how meritocracy is a sham: Thoughts on Meritocracy. People may have thought I was harsh or talking out of my ass back then, but with the recent college admissions scandal (“Varsity Blues”), I don’t think people are as sold on the idea of meritocracy anymore. Once again, the emperor has no clothes. I think the reason that this incident worried the powers that be is that it strikes directly at the myths that are used to justify the obscene inequalities we see today.

The other “outside” topic I’ve written about over the years has been what’s often referred to as Modern Monetary Theory, or Functional Finance. That, too, used to be way out there. But now it’s mainstream enough to engender attacks from the press. One came via Paul Krugman at the New York Times, and another from the socialist magazine Jacobin.

Economists who have developed the MMT paradigm, especially Stephanie Kelton, Randall Wray and Pavlina Tcherneva, have responded vigorously. Once again, wherever you fall on this topic, I think we can agree that this debate is finally happening. Tcherneva responded to the attack on MMT by Doug Henwood with a piece of her own for Jacobin: MMT Is Already Helping. Incidentally, much of my writing on economic history has been informed and inspired directly by their publications.

I can’t keep track of all of these, but interfluidity has a good roundup of the MMT Wars: MMT streetfighting

Three levels of controversy over MMT (interfluidity)

Bill Black: MMT Takes Center Stage – and Orthodox Economists Freak (Naked Capitalism)

MMT is Politically Open and Applicable to Both Capitalism and Socialism (Heteconomist)

What’s wrong with MMT? (Medium)

Another fictional characterisation of MMT finishes in total confusion (Billyblog)

I think it’s pretty clear that we’ve tentatively moved into the “violent opposition” phase. And that’s the best news I’ve heard in a while. I don’t often toot my own horn or pat myself on the back (it’s not in my nature), but I hope you’ll permit me a modicum of self-congratulation that the topics this little blog has dealt with over the years are finally being discussed in mainstream media venues.

EDIT: More good news: apparently Chicago elected six Democratic Socialists to their city council (The Guardian). Kind of ironic that they’re pulling ahead of us here in Milwaukee, where we were run by Socialist Party mayors until 1960.

Fun Facts March 2019

Sodium Citrate is the secret ingredient to making nacho cheese sauce. Coincidentally, Sodium Citrate’s chemical formula is Na3C6H5O7 (NaCHO)
Cook’s Illustrated Explains: Sodium Citrate (Cook’s Illustrated)

According to the FBI there are 300 times more impostor Navy SEALs than actual SEALs
Don Shipley (Navy SEAL) (Wikipedia)

You were more likely to get a job if you had smallpox scars in the 18th century. The scars proved that you already had smallpox and could not pass it on to your employers.
(Reddit)

1,500 private jets flew into Davos in 2019
1,500 private jets coming to Davos (BoingBoing)

According to US Customs and Border Protection, border crossings of Mexican and Central American refugees ranged from 20,000 to roughly 60,000 people per month in 2018. In Los Algodones [Mexico] alone, nearly five times as many American dental refugees are going the opposite way. To get an idea of the absurdity, one could argue there are more people currently fleeing the US’s health care system than refugees seeking asylum from extreme violence and state terror in Central America.
Millions of Americans Flood Into Mexico for Health Care — the Human Caravan You Haven’t Heard About (Truthout). Similarly:

The U.S. government estimates that close to 1 million people in California alone cross to Mexico annually for health care, including to buy prescription drugs. And between 150,000 and 320,000 Americans list health care as a reason for traveling abroad each year. Cost savings is the most commonly cited reason.
American Travelers Seek Cheaper Prescription Drugs In Mexico And Beyond (NPR). Who’s the Third World country now???

Virginia students learn in trailers while state offers Amazon huge tax breaks (The Guardian)

The term “litterbug” was popularized by Keep America Beautiful, which was created by “beer, beer cans, bottles, soft drinks, candy, cigarettes” manufacturers to shift public debate away from radical legislation to control the amount of waste these companies were (and still are) putting out.
A Beautiful (If Evil) Strategy (Plastic Pollution Coalition)

Americans Got 26.3 Billion Robocalls Last Year, Up 46 Percent From 2017.
https://www.washingtonpost.com/technology/2019/01/29/report-americans-got-billion-robocalls-last-year-up-percent/

Over the past 20 years, more than $7 billion in public money has gone toward financing the construction and renovation of NFL football stadiums.
Why do taxpayers pay billions for football stadiums? (Vox)

San Francisco has more drug addicts than it has students enrolled in its public high schools.
https://marginalrevolution.com/marginalrevolution/2019/02/san-francisco-fact-of-the-day.html

By 2025, deaths from illicit opioid abuse are expected to skyrocket by 147%, up from 2015. Between 2015 and 2025, around 700,000 people are projected to die from an opioid overdose, and 80% of these will be caused by illicit opioids such as heroin and fentanyl. (in other words, everything is going according to plan)
https://www.upi.com/Health_News/2019/02/01/Study-Illicit-opioid-deaths-to-rise-by-147-percent-by-2025/3961549026251/

35% of the decline in fertility between 2007 and 2016 can be explained by declines in births that were likely unintended; this is driven by drops in births to young women.
https://www.nber.org/papers/w25521

In 1853, not many Americans worked in an office. Even as late as the 1880s, fewer than 5 percent of Americans were involved in clerical work.
The Open Office and the Spirit of Capitalism (American Affairs)

About 40% of young adults cannot afford to buy one of the cheapest homes in their area in the UK, with the average deposit now standing at about £26,000
Young people living in vans, tiny homes and containers (BBC)

Terror attacks by Muslims receive an average of 357 percent more media coverage than those by other groups. (Newsweek). Maybe the New Zealand mosque shooting will change that.

One-third of the billions of dollars [GoFundMe] has raised since its inception went toward somebody’s medical expenses.
US Healthcare Disgrace: GoFundMe-Care Symptomatic of Extreme Inequality (Who. What. Why)

40% of police officer families experience domestic violence, in contrast to 10% of families in the general population.
http://womenandpolicing.com/violencefs.asp

After water, concrete is the most widely used substance on Earth. If the cement industry were a country, it would be the third largest carbon dioxide emitter in the world with up to 2.8bn tonnes, surpassed only by China and the US.
Concrete: the most destructive material on Earth (The Guardian)

Rural areas have not even recovered the jobs they lost in the recession….Suicide rates are on the rise across the nation but nowhere more so than in rural counties.
Two-Thirds of Rural Counties Have Fewer Jobs Today Than in 2007 (Daily Yonder)

Mapping the rising tide of suicide across the United States (Washington Post). According to plan…

On any given day, 37 percent of American adults eat fast food. For those between 20 and 39 years old, the number goes up to 45 percent – meaning that on any given day, almost half of younger adults are eating fast food.
4 troubling ways fast food has changed in 30 years (Treehugger)

Global investors dumped $4.2 billion into companies working on self-driving cars (or autonomous vehicles, AVs) in the first 3 quarters of 2018.
In Praise of Dumb Transportation (Treehugger)

In the early Middle Ages, nearly one out of every thousand people in the world lived in Angkor, the sprawling capital of the Khmer Empire in present-day Cambodia.
The city of Angkor died a slow death (Ars Technica)

Neanderthals are depicted as degenerate and slouching because the first Neanderthal skeleton found happened to be arthritic.
20 Things You didn’t Know About Neanderthals (Discover)

There were more than twice as many suicides (44,193) in the US in 2018 as there were homicides (17,793)
College Dreams Dashed (Psychology Today)

Adolescents are more likely to feel depressed and self-harm, and are less likely to get a full night’s sleep, than 10 years ago.
Adolescent health: Teens ‘more depressed and sleeping less’ (BBC)

When his eight years as President of the United States ended on January 20, 1953, private citizen Harry Truman took the train home to Independence, Missouri, mingling with other passengers along the way. He had no secret service protection. His only income was an Army pension. (Reddit)

Khoisan people of South Africa were once the most populous humans on Earth. (Ancient Origins)

[T]he contribution of top firms to US productivity growth has dropped by over 40 percent since 2000. [If] in the 1960s you were to double the productivity of GM, that would clearly have a huge impact on the economy. If you were to double the productivity of Facebook overnight, it wouldn’t even move the needle – you would get slightly better targeted ads, but zero impact on the economy.
The “Biggest Puzzle in Economics”: Why the “Superstar Economy” Lacks Any Actual Superstars (ProMarket)

Almost half of new cancer patients lose their entire life savings. (Insider)

The son of a US Governor is 6,000 times more likely to become a Governor than the average American and the son of a US Senator is 8,500 times more likely to become a senator than the average American. (Reddit)

From 1987 until 2011-12—the most recent academic year for which comparable figures are available—universities and colleges collectively added 517,636 administrators and professional employees…

Part-time faculty and teaching assistants now account for half of instructional staffs at colleges and universities, up from one-third in 1987. During the same period, the number of administrators and professional staff has more than doubled. That’s a rate of increase more than twice as fast as the growth in the number of students.
New Analysis Shows Problematic Boom In Higher Ed Administrators (Huffington Post)

From 2009 to 2017, major depression among 20- to 21-year-olds more than doubled, rising from 7 percent to 15 percent. Depression surged 69 percent among 16- to 17-year-olds. Serious psychological distress, which includes feelings of anxiety and hopelessness, jumped 71 percent among 18- to 25-year-olds from 2008 to 2017. Twice as many 22- to 23-year-olds attempted suicide in 2017 compared with 2008, and 55 percent more had suicidal thoughts. The increases were more pronounced among girls and young women. By 2017, one out of five 12- to 17-year-old girls had experienced major depression in the previous year.
The mental health crisis among America’s youth is real – and staggering (The Conversation)

Infectious diseases that ravaged populations in the Middle Ages are resurging in California and around the country, especially in homeless encampments.
“Medieval” Diseases Flare as Unsanitary Living Conditions Proliferate (Truthout) Who’s the Third World Country? Repeat after me, “according to plan…”

Benjamin Franklin chose never to patent any of his inventions or register any copyright (SmallBusiness.com)

I think it’s time to get the hell out of here:

Rhapsody on Blue

A few years ago, a photograph went “viral” on the internet. It was just a simple picture of a dress. What was so compelling about it?


Well, what was so incredible about this particular photo was that nobody could agree about what color it was. Some people said it was white with gold stripes. Others insisted, just as firmly, that it was blue with black stripes (which is what I saw). As the BBC reported, even Kim and Kanye couldn’t agree, but decided to stay together for the sake of the money and the fame.

Why everyone is asking: What colour is this dress? (BBC)

White & Gold or Blue & Black? Science of the Mystery Dress (Live Science)

Relevant xkcd: https://xkcd.com/1492/

This brings to mind an old adage I heard a long time ago: “You don’t see with your eyes. You see with your brain with the help of your eyes.”

And that simple, yet profound, distinction makes all the difference. Once you grasp that, a lot of these ideas begin falling into place.

For another example somewhat more pertinent to our discussion of auditory hallucinations, a sound clip went viral in much the same way. When the clip was played, some people heard the name “Laurel”. Others insisted that what the clip really said was “Yanny”. As one researcher said of these illusions, “All of this goes to highlight just how much the brain is an active interpreter of sensory input, and thus that the external world is less objective than we like to believe.”

‘Yanny’ or ‘Laurel’? Why Your Brain Hears One or the Other in This Maddening Illusion (Live Science)

Of course, the ultimate reason for the illusion was exactly the same: You don’t hear with your ears. You hear with your brain with the help of your ears.

Now, you need to keep this in mind for the discussion we’re about to have.

We’ve talked previously about how metaphor, analogy, language, and culture shape our perceptions of the world around us. It turns out that numerous studies have confirmed that the classification schemes, metaphors, models, and language that we use color our perception of the so-called “objective” world. And ‘color’ turns out to be an apt word.

For example, many cultures around the world do not make a distinction between the colors blue and green. That is, they don’t actually have a word for ‘blue’; rather, blue and green are classified as different shades of the same color. In fact, 68 languages use green-or-blue (‘grue’) words, compared to only 30 languages that use distinct words for green and blue. This does not mean that people in these cultures literally cannot ‘see’ the color blue, as if they perceived it as another color, or as somehow invisible (color perception is created by light wavelengths striking cone cells on the retina). Rather, they simply feel that no special distinction needs to be made between these colors in the language.

It turns out that this actually affects how such cultures perceive the world around them. The Himba (whom we mentioned previously) also do not make this distinction. When given the task of picking out which of several shades of blue and green was different, they were slower than people from cultures that do make the distinction. By contrast, they do differentiate multiple shades of green, and were able to identify a different shade of green faster than people from cultures that make no such distinction (such as ours).
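
To make the idea of a classification scheme concrete, here is a toy sketch. The category boundaries below are invented for illustration (they are not measured data), but it shows the relevant contrast: two hues that English splits into ‘green’ and ‘blue’ fall under a single word in a ‘grue’ language.

    # Toy illustration: the same hue gets a different name depending on the
    # language's color lexicon. Wavelengths are approximate nanometre values;
    # the category boundaries are illustrative, not measured data.
    LEXICONS = {
        "English":       [("blue", 450, 495), ("green", 495, 570)],
        "grue language": [("grue", 450, 570)],  # one word spans blue and green
    }

    def color_word(language: str, wavelength_nm: float) -> str:
        """Return the color word a speaker of `language` would use."""
        for word, lo, hi in LEXICONS[language]:
            if lo <= wavelength_nm < hi:
                return word
        return "(outside this toy lexicon)"

    # 510 nm is 'green' and 470 nm is 'blue' to an English speaker, but both
    # are simply 'grue' in the other lexicon - the contrast the Himba
    # discrimination experiments probe.
    for lang in LEXICONS:
        print(f"{lang}: 510 nm -> {color_word(lang, 510)}, "
              f"470 nm -> {color_word(lang, 470)}")

The stimulus is identical in both rows of output; only the partition imposed on it differs, which is where the reaction-time differences described above come from.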

…there’s actually evidence that, until modern times, humans didn’t actually see the colour blue…the evidence dates all the way back to the 1800s. That’s when scholar William Gladstone – who later went on to be the Prime Minister of Great Britain – noticed that, in the Odyssey, Homer describes the ocean as “wine-dark” and other strange hues, but he never uses the word ‘blue’.

A few years later, a philologist (someone who studies language and words) called Lazarus Geiger decided to follow up on this observation, and analysed ancient Icelandic, Hindu, Chinese, Arabic, and Hebrew texts to see if they used the colour. He found no mention of the word blue.

When you think about it, it’s not that crazy. Other than the sky, there isn’t really much in nature that is inherently a vibrant blue.

In fact, the first society to have a word for the colour blue was the Egyptians, the only culture that could produce blue dyes. From then, it seems that awareness of the colour spread throughout the modern world…Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) [голубой] and dark blue (siniy) [синий], can discriminate between light and dark shades of blue much faster than English speakers.

This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t actually see blue. Or, more accurately, they probably saw it as we do now, but they never really noticed it…

There’s Evidence Humans Didn’t Actually See Blue Until Modern Times (Science Alert – note the title is misleading)

In fact, the way color is described throughout the Iliad is distinctly odd, a fact that scholars have long noted:

Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze.

It gets stranger. Not only was Homer’s palette limited to only five colors (metallics, black, white, yellow-green, and red), but a prominent philosopher even centuries later, Empedocles, believed that all color was limited to four categories: white/light, dark/black, red, and yellow. Xenophanes, another philosopher, described the rainbow as having but three bands of color: porphyra (dark purple), khloros, and erythros (red).

The Wine-Dark Sea: Color and Perception in the Ancient World (Clarkesworld Magazine)

Perhaps the blind poet was, indeed, tripping. But the ancient Greeks were hardly alone in their unusual description of colors:

The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”).

It turns out that the appearance of color in ancient texts, while also reasonably paralleling the frequency of colors that can be found in nature (blue and purple are very rare, red is quite frequent, and greens and browns are everywhere), tends to happen in the same sequence regardless of civilization: red : ochre : green : violet : yellow—and eventually, at least with the Egyptians and Byzantines, blue.

The Wine-Dark Sea: Color and Perception in the Ancient World (Clarkesworld Magazine)

Of course, biology has a role to play here too. If someone is red/green color blind, as about 1 in 10 men are, they will not differentiate between red and green. Nor will they be able to adequately describe what they are seeing to those of us who are not color-blind.

I always remember a discussion I had many years ago with a friend of mine who was color-blind (the one who drowned, incidentally). I asked him if he saw red and green as both red or both green. Here’s what he told me: “They’re the same.”

Me: ‘The same’ as in they’re both red, or ‘the same’ as in they’re both green?

Him: Neither. They’re just the same.

Me: So…they’re both gray then? No color at all.

Him: No, it’s not gray. It’s a color.

Me: Okay, which color? Red or green?

Him: Neither.

Me: How can it be neither? It has to be a color. Which color is it, red or green? Or some other color?

Him: I don’t know. They’re just…the same.

And on and on we went…

The Radiolab podcast did a whole episode on the topic which is worth a listen: Why the sky isn’t blue (Radiolab)

And a video explanation: The Invention Of Blue (YouTube)

The World Atlas of Language Structures Online has an entire entry devoted to terms for Green and Blue that is worth reading. https://wals.info/chapter/134

This post: Blue on Blue goes into this topic in exhaustive detail.

Perception is as much cognition as sensation. Colors don’t exist in the world; they are our brain’s way of processing light waves detected by the eyes. Someone unable to see from birth will never be able to see normal colors, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that primarily happens in infancy and childhood.

Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, etc. before our culture has the language for it? And, conversely, does the language we use and how we use it indicate our actual experience? Or does it filter and shape it? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing it? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?

This isn’t just about color. There is something extremely bizarre going on, according to what we moderns assume to be the case about the human mind and perception.

Blue on Blue (Benjamin David Steele – a lot of material on Jaynes’s ideas here)

Another example is the fact that some cultures don’t have words for the kind of relative directions that we use (left, right, etc.). Instead, they only have the cardinal directions—north, south, east, and west. This “exocentric orientation” gives them an almost superhuman sense of direction and orientation compared to people in industrialized cultures:

In order to speak a language like Guugu Yimithirr, you need to know where the cardinal directions are at each and every moment of your waking life. You need to have a compass in your mind that operates all the time, day and night, without lunch breaks or weekends off, since otherwise you would not be able to impart the most basic information or understand what people around you are saying.

Indeed, speakers of geographic languages seem to have an almost-superhuman sense of orientation. Regardless of visibility conditions, regardless of whether they are in thick forest or on an open plain, whether outside or indoors or even in caves, whether stationary or moving, they have a spot-on sense of direction. They don’t look at the sun and pause for a moment of calculation before they say, “There’s an ant just north of your foot.” They simply feel where north, south, west and east are, just as people with perfect pitch feel what each note is without having to calculate intervals.

There is a wealth of stories about what to us may seem like incredible feats of orientation but for speakers of geographic languages are just a matter of course. One report relates how a speaker of Tzeltal from southern Mexico was blindfolded and spun around more than 20 times in a darkened house. Still blindfolded and dizzy, he pointed without hesitation at the geographic directions.

Does Your Language Shape How You Think? (New York Times)

The reference to perfect pitch is interesting, since it’s more likely for speakers of tonal languages (say, Mandarin Chinese or Vietnamese) to have perfect pitch than for people who do not speak a tonal language (such as English). Another common feature of many languages is that statements, by their very syntactic structure, establish whether the speaker knows something for sure, or is making an extrapolation. For example:

…some languages, like Matsés in Peru, oblige their speakers, like the finickiest of lawyers, to specify exactly how they came to know about the facts they are reporting. You cannot simply say, as in English, “An animal passed here.” You have to specify, using a different verbal form, whether this was directly experienced (you saw the animal passing), inferred (you saw footprints), conjectured (animals generally pass there that time of day), hearsay or such. If a statement is reported with the incorrect “evidentiality,” it is considered a lie.

So if, for instance, you ask a Matsés man how many wives he has, unless he can actually see his wives at that very moment, he would have to answer in the past tense and would say something like “There were two last time I checked.” After all, given that the wives are not present, he cannot be absolutely certain that one of them hasn’t died or run off with another man since he last saw them, even if this was only five minutes ago. So he cannot report it as a certain fact in the present tense. Does the need to think constantly about epistemology in such a careful and sophisticated manner inform the speakers’ outlook on life or their sense of truth and causation?

Does Your Language Shape How You Think? (New York Times)
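
As a toy illustration of how such a grammar bakes epistemology into every sentence (the type and field names below are hypothetical, invented for this sketch rather than taken from any description of Matsés), one could model an evidential-marking language as a statement type that simply cannot be constructed without a source-of-knowledge tag:

```python
# Hypothetical sketch: a statement that is ill-formed without an evidential.
from dataclasses import dataclass
from enum import Enum

class Evidential(Enum):
    DIRECT = "directly experienced"
    INFERRED = "inferred from traces"
    CONJECTURED = "conjectured from habit"
    HEARSAY = "reported by others"

@dataclass
class Statement:
    claim: str
    evidence: Evidential  # required field: no way to assert without it

    def render(self) -> str:
        return f"{self.claim} [{self.evidence.value}]"

print(Statement("An animal passed here", Evidential.INFERRED).render())
# -> An animal passed here [inferred from traces]
```

In English the evidential is optional extra wording; in a language like Matsés, as the article notes, leaving it off (or getting it wrong) makes the utterance a lie.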

The Pirahã of the Brazilian Amazon have a number of these linguistic anomalies, as reported by Daniel Everett. Most famously, they do not use recursion in their language. They have essentially no numbering system—their only numbers are one, two, and many. Nouns have no plural form. They have no simple categorical words for colors; rather, they describe color in terms of various things in their environment, somewhat reminiscent of Homer’s graphic descriptions above:

I next noticed…that the Pirahãs had no simple color words, that is, no terms for color that were not composed of other words. I had originally simply accepted Steve Sheldon’s analysis that there were color terms in Pirahã. Sheldon’s list of colors consisted of the terms for black, white, red (also referring to yellow), and green (also referring to blue).

However, these were not simple words, as it turned out. They were phrases. More accurate translations of the Pirahã words showed them to mean: “blood is dirty” for black; “it sees” or “it is transparent” for white; “it is blood” for red; and “it is temporarily being immature” for green.

I believe that color terms share at least one property with numbers. Numbers are generalizations that group entities into sets that share general arithmetical properties, rather than object-particular, immediate properties. Likewise, as numerous studies by psychologists, linguists, and philosophers have demonstrated, color terms are unlike other adjectives or other words because they involve special generalizations that put artificial boundaries in the spectrum of visible light.

This doesn’t mean that the Pirahãs cannot perceive colors or refer to them. They perceive the colors around them like any of us. But they don’t codify their color experiences with single words that are inflexibly used to generalize color experiences. They use phrases.

“Don’t Sleep There Are Snakes” by Daniel Everett, p. 119

They also do not have any relative directions like ‘left’ and ‘right’; only absolute ones, much like Australian groups. In their culture, everything is oriented relative to the river beside which they live:

During the rest of our hunt, I noticed that directions were given either in terms of the river (upriver, downriver, to the river) or the jungle (into the jungle). The Pirahãs knew where the river was (I couldn’t tell; I was thoroughly disoriented). They all seemed to orient themselves to their geography rather than to their bodies, as we do when we use left hand and right hand for directions.

I didn’t understand this. I had never found the words for left hand and right hand. The discovery of the Pirahãs’ use of the river in giving directions did explain, however, why when the Pirahãs visited towns with me, one of their first questions was “Where is the river?” They needed to know how to orient themselves in the world!

Only years later did I read the fascinating research coming from the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, under the direction of Dr. Stephen C. Levinson. In studies from different cultures and languages, Levinson’s team discovered two broad divisions in the ways cultures and languages give local directions. Many cultures are like American and European cultures and orient themselves in relative terms, dependent on body orientation, such as left and right. This is called by some endocentric orientation. Others, like the Pirahãs, orient themselves to objects external to their body, what some refer to as exocentric orientation.

“Don’t Sleep There Are Snakes” by Daniel Everett, p. 216
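
To make the endocentric/exocentric distinction concrete, here is a minimal sketch (purely illustrative; the function and values are mine, not drawn from any of the sources quoted here) of the bookkeeping that speakers of geographic languages perform effortlessly: turning a body-relative direction into a cardinal one requires knowing your own compass heading at all times.

```python
# Hypothetical illustration: converting an endocentric direction ("left")
# into the exocentric, cardinal direction a geographic language requires.
CARDINALS = ["north", "east", "south", "west"]
RELATIVE_OFFSETS = {"ahead": 0, "right": 90, "behind": 180, "left": 270}

def egocentric_to_cardinal(heading_deg: float, relative: str) -> str:
    """Return the cardinal direction nearest to a body-relative direction,
    given the speaker's compass heading in degrees (0 = facing north)."""
    bearing = (heading_deg + RELATIVE_OFFSETS[relative]) % 360
    return CARDINALS[round(bearing / 90) % 4]

# Facing west (a heading of 270 degrees), "to your left" is south:
print(egocentric_to_cardinal(270, "left"))  # -> south
```

An endocentric speaker only performs this conversion on demand; a speaker of a geographic language like Guugu Yimithirr must, in effect, keep heading_deg current every waking moment.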

Despite what some might characterize as simplicity, the verbs in the language display a remarkable complexity and nuance:

Although Pirahã nouns are simple, Pirahã verbs are much more complicated. Each verb can have as many as sixteen suffixes; that is, up to sixteen suffixes in a row. Not all suffixes are always required, however. Since a suffix can be present or absent, this gives us two possibilities for each of the sixteen suffixes: 2^16, or 65,536, possible forms for any Pirahã verb. The number is not this large in reality because some of the meanings of different suffixes are incompatible and could not both appear simultaneously. But the number is still many times larger than in any European language. English only has in the neighborhood of five forms for any verb: sing, sang, sung, sings, singing. Spanish, Portuguese, and some other Romance languages have forty or fifty forms for each verb.

Perhaps the most interesting suffixes, however (though these are not unique to Pirahã), are what linguists call evidentials, elements that represent the speaker’s evaluation of his or her knowledge of what he or she is saying. There are three of these in Pirahã: hearsay, observation, and deduction…The placement of all the various suffixes on the basic verb is a feature of grammar. There are sixteen of these suffixes. Meaning plays at least a partial role in how they are placed. So, for example, the evidentials are at the very end because they represent a judgment about the entire event being described.

“Don’t Sleep There Are Snakes” by Daniel Everett, pp. 196-197
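
Everett’s arithmetic is just the combinatorics of optional slots: sixteen independent present-or-absent suffixes give 2^16 = 65,536 possible forms, and each incompatibility between a pair of suffixes removes the forms containing both. A quick sketch (the incompatible pair chosen below is hypothetical, purely to show the effect):

```python
# Illustrative only: counting verb forms as combinations of 16 optional slots.
from itertools import product

NUM_SUFFIXES = 16
print(2 ** NUM_SUFFIXES)  # -> 65536 forms if all slots are independent

# Suppose (hypothetically) suffixes 3 and 7 can never co-occur:
count = sum(
    1
    for form in product((0, 1), repeat=NUM_SUFFIXES)
    if not (form[3] and form[7])
)
print(count)  # -> 49152, i.e. 65536 minus the 2**14 forms containing both
```

This is why Everett says the real number is smaller than 65,536 while still dwarfing the five-odd forms of an English verb.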

This brings to mind a fascinating point that is not widely known: as material cultures become more complex, their languages actually become simpler!

Comparing languages across differing cultures suggests an inverse relation between the complexity of grammar and the complexity of culture; the simpler the culture in material terms, the more complex the grammar. Mark Turin notes that colonial-era anthropologists set out to show that indigenous peoples were at a lower stage of evolutionary development than the imperial Western peoples, but linguistic evidence showed the languages of supposedly primitive peoples to have surprisingly complex grammar.

He writes: “Linguists were returning from the field with accounts of extremely complex verbal agreement systems, huge numbers of numeral classifiers, scores of different pronouns and nouns, and incredible lexical variation for terms that were simple in English. Such languages appeared to be untranslatable…” (p. 17) …Thus the languages of simpler cultures tend to pack grammatical information into single words, whereas those of industrial society tend to use separate words in combination to create grammatical distinctions… (p. 52) …In some languages, entire sentences are packed into a single word. Nicholas Evans and Stephen Levinson give the examples of Ęskakhǭna’tàyęthwahs, from the Cayuga of North America, which means “I will plant potatoes for them again,” and abanyawoihwarrgahmarneganjginjeng, from the Northern Australian language Bininj Gun-wok, which means “I cooked the wrong meat for them again.” (pp. 16-17)

“The Truth About Language” by Michael C. Corballis

Last time we referred to the substantial differences in behavior discovered by Joseph Henrich et al. between Western “WEIRD” cultures and, well, just about everyone else.

As Heine, Norenzayan, and Henrich furthered their search, they began to find research suggesting wide cultural differences almost everywhere they looked: in spatial reasoning, the way we infer the motivations of others, categorization, moral reasoning, the boundaries between the self and others, and other arenas. These differences, they believed, were not genetic.

The distinct ways Americans and Machiguengans played the ultimatum game, for instance, wasn’t because they had differently evolved brains. Rather, Americans, without fully realizing it, were manifesting a psychological tendency shared with people in other industrialized countries that had been refined and handed down through thousands of generations in ever more complex market economies.

When people are constantly doing business with strangers, it helps when they have the desire to go out of their way (with a lawsuit, a call to the Better Business Bureau, or a bad Yelp review) when they feel cheated. Because Machiguengan culture had a different history, their gut feeling about what was fair was distinctly their own. In the small-scale societies with a strong culture of gift-giving, yet another conception of fairness prevailed. There, generous financial offers were turned down because people’s minds had been shaped by a cultural norm that taught them that the acceptance of generous gifts brought burdensome obligations. Our economies hadn’t been shaped by our sense of fairness; it was the other way around.

The growing body of cross-cultural research that the three researchers were compiling suggested that the mind’s capacity to mold itself to cultural and environmental settings was far greater than had been assumed. The most interesting thing about cultures may not be in the observable things they do—the rituals, eating preferences, codes of behavior, and the like—but in the way they mold our most fundamental conscious and unconscious thinking and perception.

We Aren’t the World (Pacific Standard)

It brings to mind another old adage: “What we call human nature is really human habit.” That may not be true for everything, but it looks like it may be true for at least some things.


Jaynes makes a great deal of the fact that the Greek language lacked any reference to an inner decision-making process (mind), or to any kind of “soul” apart from the body. When it isn’t locating the source of actors’ motivations in the gods speaking directly to them, it is locating it in various parts of the body or internal organs. The terms used in place of any kind of reference to mind or spirit are often body parts—heart, chest, lungs, liver, spleen, guts, and so on. These body parts later come to refer to a mind or soul (e.g. nous or psyche), but only much later. Psyche, for example, initially referred to ‘breath’, and nous (noos) referred to vision. Only much later do these words become associated with concepts of spirit, soul, or self. Brian McVeigh puts it somewhat more precisely: “[L]inguo-conceptual changes [reflect] psychohistorical developments; because supernatural entities functioned in place of our inner selves, vocabularies for psychological terms were strikingly limited in ancient languages.” Jaynes writes:

There is in general no consciousness in the Iliad. I am saying ‘in general’ because I shall mention some exceptions later. And in general, therefore, no words for consciousness or mental acts. The words in the Iliad that in a later age come to mean mental things have different meanings, all of them more concrete. The word psyche, which later means soul or conscious mind, is in most instances life-substances, such as blood or breath: a dying warrior breathes out his psyche onto the ground or breathes it out in his last gasp.

The thumos, which later comes to mean something like emotional soul, is simply motion or agitation. When a man stops moving, the thumos leaves his limbs. But it is also somehow like an organ itself, for when Glaucus prays to Apollo to alleviate his pain and to give strength to help his friend Sarpedon, Apollo hears his prayer and “casts strength in his thumos”. The thumos can tell a man to eat, drink, or fight. Diomedes says in one place that Achilles will fight “when the thumos in his chest tells him to and a god rouses him.” But it is not really an organ and not always localized; a raging ocean has thumos.

A word of somewhat similar use is phren, which is always localized anatomically as the midriff, or sensations in the midriff, and is usually used in the plural. It is the phrenes of Hector that recognize that his brother is not near him; this means what we mean by “catching one’s breath in surprise”. It is only centuries later that it comes to mean mind or ‘heart’ in its figurative sense.

Perhaps most important is the word noos which, spelled as nous in later Greek, comes to mean conscious mind. It comes from the word noeein, to see. Its proper translation in the Iliad would be something like perception or recognition or field of vision. Zeus “holds Odysseus in his noos.” He keeps watch over him.

Another important word, which perhaps comes from the doubling of the word meros (part), is mermera, meaning in two parts. This was made into a verb by adding the ending -izo, the common suffix which can turn a noun into a verb, the resulting word being mermerizein, to be put into two parts about something. Modern translators, for the sake of supposed literary quality in their work, often use modern terms and subjective categories which are not true to the original. Mermerizein is thus wrongly translated as to ponder, to think, to be of divided mind, to be troubled about, to try to decide. But essentially it means to be in conflict about two actions, not two thoughts. It is always behavioristic. It is said several times of Zeus, as well as others. The conflict is often said to go on in the thumos, or sometimes in the phrenes, but never in the noos. The eye cannot doubt or be in conflict, as the soon-to-be-invented conscious mind will be able to.

These words are in general, and with certain exceptions, the closest that anyone, authors or characters or gods, usually get to having conscious minds or thoughts.

There is also no concept of will or word for it, the concept developing curiously late in Greek thought. Thus, Iliadic men have no will of their own and certainly no notion of free will. Indeed, the whole problem of volition, so troubling, I think, to modern psychological theory, may have had its difficulties because the words for such phenomena were invented so late.

A similar absence from Iliadic language is a word for body in our sense. The word soma, which in the fifth century B.C. comes to mean body, is always in the plural in Homer and means dead limbs or a corpse. It is the opposite of psyche. There are several words which are used for various parts of the body, and, in Homer, it is always these parts that are referred to, and never the body as a whole.

Now this is all very peculiar. If there is no subjective consciousness, no mind, soul, or will, in Iliadic men, what then initiates behavior?

OoCitBotBM; pp. 69-71

Essentially, what Jaynes is doing is trying to use language to understand the consciousness of these ancient people, similar to what we saw anthropologists and linguists doing for the various remote and isolated cultures in existence today. Their language may not dictate reality, but the words they use to describe their world offer a clue, perhaps the only clue, as to how they perceive themselves, their world, and their place in it, and how it might differ from our ego-driven point of view. After all, we can’t just hop in a time machine and head back to administer psychological tests.

P.S. As an aside to the idea of aural hallucinations, a fascinating study found that non-clinical voice hearers could distinguish “hidden speech” far more effectively than others. This is especially interesting since most studies featuring voice-hearers use the clinical (schizophrenic, epileptic, Parkinson’s, etc.) population, rather than ordinary people. The reasons for this ability are not known:

The study involved people who regularly hear voices, also known as auditory verbal hallucinations, but do not have a mental health problem. Participants listened to a set of disguised speech sounds known as sine-wave speech while they were having an MRI brain scan. Usually these sounds can only be understood once people are either told to listen out for speech, or have been trained to decode the disguised sounds.

Sine-wave speech is often described as sounding a bit like birdsong or alien-like noises. However, after training people can understand the simple sentences hidden underneath (such as “The boy ran down the path” or “The clown had a funny face”).

In the experiment, many of the voice-hearers recognised the hidden speech before being told it was there, and on average they tended to notice it earlier than other participants who had no history of hearing voices. The brains of the voice-hearers automatically responded to sounds that contained hidden speech compared to sounds that were meaningless, in the regions of the brain linked to attention and monitoring skills.

People who ‘hear voices’ can detect hidden speech in unusual sounds (Science Daily)

P.P.S. xkcd did a public survey on color perception and naming a while back:
https://blog.xkcd.com/2010/05/03/color-survey-results/
https://xkcd.com/color/rgb/

The Archaic Mentality

The inspiration for this series of posts was an article in Psychology Today entitled: Did Our Ancestors Think Like Us? I’m pretty confident that they didn’t, but in what sense did they differ? Were they as different as Jaynes described, or was it something less extreme?

Imagine that you are a time-traveler, able to travel back roughly 40,000 years to the age of the first anatomically modern homo sapiens. Imagine stepping out of your time machine and standing face to face with one of your ancestors: Another human with a brain just as big as yours, and genes virtually identical to your genes. Would you be able to speak to this ancient human? Befriend them? Fall in love with them? Or would your ancestor be unrecognizable, as distinct from you as a wolf is distinct from a pet dog?

…Some think that, since we have the same genes as ancient humans, we should show the same mannerisms. Others suspect that human psychology may have changed dramatically over time. Nobody definitely knows (I certainly don’t), but my hunch is that the human mind today works very differently than did our ancestors’ minds.

Did Our Ancestors Think Like Us? (Psychology Today)

Brian McVeigh sums up Jaynes’s ideas this way:

In The Origin of Consciousness in the Breakdown of the Bicameral Mind [Jaynes] argued that conscious subjective interiority was not a bioevolutionary phenomenon. Rather, interiority—and by this term he did not mean perceiving, thinking or reasoning but the ability to introspect and engage in self-reflectivity—emerged historically as a cultural construction only about three millennia ago.
The Psychohistory of Metaphors, Brian McVeigh p. 133

I would argue that there is recent psychological research that tentatively backs up some of Jaynes’ claims. New research has shown that a lot of what we thought was just “basic human cognition” turns out to be socioculturally constructed. Much of the world today does not think or reason in the same way as members of Western industrial societies do. The blogger writes:

Many animals learn how to solve problems by watching other animals try and fail, but humans appear to take social learning to another level: we learn how to think from one another.

Consider that when people move to a new culture, they actually begin taking on the emotions of that culture, reporting more everyday sadness in cultures that feel more sadness and surprise in cultures where people feel more surprise. Consider that people’s ability to read others’ thoughts and feelings from their behavior depends on the number of words in their native language indicating mental states. Consider that people’s level of prejudice towards other groups (i.e. the extent of their “us versus them” mentality) and moral convictions (i.e. their belief that some acts are fundamentally right or wrong) strongly depends on whether or not they follow an Abrahamic religion. And consider that people’s ability to think “creatively,” to generate new solutions that diverge from old ones, depends on how strictly their culture regulates social norms. This is just a small sampling from hundreds of studies that show how flexible the human mind is.

For a graphic example, it was recently determined that the “primitive” Himba of Namibia are actually more mentally agile than supposedly “high IQ” Westerners at solving novel problems:

“We suggest that through formal education, Westerners are trained to depend on learned strategies. The Himba participate in formal education much less often and this is one possible reason why they exhibited enhanced cognitive flexibility,”

Cognitive neuroscientists observe enhanced mental flexibility in the seminomadic Himba tribe (PsyPost). He continues:

The second reality that makes me think our minds work differently today than they did thousands of years ago is that human culture is staggeringly diverse. We speak over 6,000 languages, follow 4,000 religions, and live our lives according to a sprawling set of social and moral customs. Some other animals have diverse culture: Chimpanzees, for example, forage for food in a number of different ways that are probably socially learned. But human cultural diversity goes beyond one or two kinds of differences; our cultures are different in almost every way imaginable. The development of this cultural diversity may have had a profound impact on our psychologies.

When you put these realities together, you have (a) an amazingly diverse species with (b) an amazing capacity to learn from diversity. Add thousands of years of development and cultural change to the mix and you likely get modern human thinking that scarcely resembles ancient human psychology. This doesn’t mean that today’s humans are “better” than yesterday’s; it just means that humans are fascinating animals, more cognitively malleable than any other.

The writer doesn’t get into more detail than that, and there aren’t any further explanations so far. But the idea was backed up by a landmark paper which came out a few years ago by Joseph Henrich, along with Steven J. Heine and Ara Norenzayan. They write:

There are now enough sources of experimental evidence, using widely differing methods from diverse disciplines, to indicate that there is substantial psychological and behavioral variation among human populations.

The reasons that account for this variation may be manifold, including behavioral plasticity in response to different environments, divergent trajectories of cultural evolution, and, perhaps less commonly, differential distribution of genes across groups in response to different selection pressures… At the same time, we have also identified many domains in which there are striking similarities across populations. These similarities could indicate reliably developing pan-human adaptations, byproducts of innate adaptations (such as religion), or independent cultural inventions or cultural diffusions of learned responses that have universal utility (such as counting systems, or calendars)…

Not only aren’t Americans typical of how the rest of the world thinks, but Americans are shockingly different (surprising, huh?). As one writer put it, “Social scientists could not possibly have picked a worse population from which to draw broad generalizations. Researchers had been doing the equivalent of studying penguins while believing that they were learning insights applicable to all birds.”

As you might imagine, one of the major differences has to do with radical individualism. Americans see themselves as “rugged individualists,” whereas everyone else sees themselves as part of a larger social fabric:

[S]ome cultures regard the self as independent from others; others see the self as interdependent. The interdependent self — which is more the norm in East Asian countries, including Japan and China — connects itself with others in a social group and favors social harmony over self-expression. The independent self — which is most prominent in America — focuses on individual attributes and preferences and thinks of the self as existing apart from the group.

…Unlike the vast majority of the world, Westerners (and Americans in particular) tend to reason analytically as opposed to holistically. That is, the American mind strives to figure out the world by taking it apart and examining its pieces. Show a Japanese and an American the same cartoon of an aquarium, and the American will remember details mostly about the moving fish while the Japanese observer will likely later be able to describe the seaweed, the bubbles, and other objects in the background. Shown another way, in a different test analytic Americans will do better on…the “rod and frame” task, where one has to judge whether a line is vertical even though the frame around it is skewed. Americans see the line as apart from the frame, just as they see themselves as apart from the group.

Are Americans the Weirdest People on Earth? (Big Think)

As for why Americans, and WEIRD (Western, Educated, Industrialized, Rich, Democratic) countries more generally, are so different from the rest of the world, the authors of the original paper speculate:

To many anthropologically-savvy researchers it is not surprising that Americans, and people from modern industrialized societies more generally, appear unusual vis-á-vis the rest of the species.

For the vast majority of its evolutionary history, humans have lived in small-scale societies without formal schools, government, hospitals, police, complex divisions of labor, markets, militaries, formal laws, or mechanized transportation. Every household provisioned much or all of their own food, made its own clothes, tools, and shelter, and–aside from various kinds of sexual divisions of labor–almost everyone had to master the same skills and domains of knowledge.

Children grew up in mixed age play groups, received little active instruction, and learned largely by observation and imitation. By age 10, children in some foraging societies obtain sufficient calories to feed themselves, and adolescent females take on most of the responsibilities of women.

WEIRD people, from this perspective, grow up in, and adapt to, a highly unusual environment. It should not be surprising that their psychological world is unusual as well. p. 38 (emphasis mine)

I wrote about this study back in 2013: Americans are WEIRD.

The differences between American thinking and that of the rest of the world seem to mirror the left brain/right brain split described by Iain McGilchrist:

The left hemisphere is dependent on denotative language and abstraction; it yields clarity and the power to manipulate things that are known and fixed. The right hemisphere yields a world of individual, changing, evolving, interconnected, living beings within the context of the lived world, but the nature of things is never fully graspable or perfectly known; this world exists in a certain relationship. The two hemispheres give us two versions of the world, and we combine them in different ways all the time. We need to rely on certain things to manipulate the world, but for a broad understanding of it, we need to use knowledge that comes from the right hemisphere.

A Psychiatrist Explains the Difference Between Left Brain and Right Brain (Hack Spirit)

Given that thousands of years ago there were NO industrialized countries with a majority of the population educated, wealthy, or literate, it’s pretty obvious that thinking must have been quite different. Of course, that does not prove Jaynes’s ideas. However, if even modern psychology researchers report substantial differences among existing populations, why is it hard to believe that people separated from us by thousands of years are more different from us than alike?

It’s also worth pointing out that the fundamental structure of our brain changes in response to the activities we undertake to navigate our environment. It’s been hypothesized that the use of the internet and ubiquitous computer screens is “rewiring” our brains in some, possibly nefarious, way. An article about this topic from the BBC points out that this is not new: everything we do rewires our brains in some way. In other words, we do not come into the world completely “done”; much of how our brains function is culturally determined. This, in turn, changes the brain’s structure. So we need not posit that the brain architecture of bicameral people was somehow radically different, only that they were using their brains in a different way, as determined by the cultural context.

We regularly do things that have a profound effect on our brains – such as reading or competitive sports – with little thought for our brain fitness. When scientists look at people who have spent thousands of hours on an activity they often see changes in the brain. Taxi drivers, famously, have a larger hippocampus, a part of the brain recruited for navigation. Musicians’ brains devote more neural territory to brain regions needed for playing their instruments. So much so, in fact, that if you look at the motor cortex of string players you see bulges on one side (because the fine motor control for playing a violin, for example, is only on one hand), whereas the motor cortex of keyboard players bulges on both sides (because piano playing requires fine control of both hands).

Does the internet rewire our brains? (BBC Future)

In a book I cited earlier, Alone in the World?, the author lists the items that archaeologists look for to indicate behavioral modernity (since culture is ephemeral and does not fossilize):

1. A spoken language;

2. The cognitive capacity to generate mental symbols, as expressed in art and religion;

3. Explicit symbolic behavior, i.e., the ability to represent objects, people, and abstract concepts with arbitrary symbols, vocal or visual, and to reify such symbols in cultural practices like painting, engraving, and sculpture;

4. The capacity for abstract thinking, the ability to act with reference to abstract concepts not limited to time and space;

5. Planning depth, or the ability to formulate strategies based on past experience and to act on them in a group context;

6. Behavioral, economic, and technological innovation; and

7. A bizarre inability to sustain prolonged bouts of boredom.

Often people cite the spectacular cave art of Ice Age Europe as evidence that the people living in such caves must have been behaviorally modern. But consider that some of the most sought-after art in the twentieth century was made by patients suffering from schizophrenia (voice hearing)!

The Julian Jaynes Society has compiled a list of questions about the behavior of ancient peoples that are difficult to explain without recourse to some kind of bicameral theory. I’ve copied and abridged their list below:

1. The Saliency and “Normalcy” of Visions in Ancient Times. Why have hallucinations of gods in the ancient world been noted with such frequency?

2. The Frequency of “Hearing Voices” Today. Why do auditory hallucinations occur more frequently in the general population than was previously known? If hallucinations are simply a symptom of a dysfunctional brain, they should be relatively rare. Instead, they have been found in normal (non-clinical) populations worldwide.

3. Imaginary Companions in Children. Why do between one-quarter and one-third of modern children “hear voices,” called imaginary companions?

4. Command Hallucinations. Why do patients labeled schizophrenic, as well as other voice-hearers, frequently experience “command hallucinations” that direct behavior — as would be predicted by Jaynes’s theory? If hallucinations are simply a symptom of a dysfunctional brain, one would expect they would consist of random voices, not commentary on behavior and behavioral commands.

5. Voices and Visions in Pre-literate Societies. Why are auditory and visual hallucinations, as well as divination practices and visitation dreams, found in pre-literate societies worldwide?

6. The Function of Language Areas in the Non-Dominant Hemisphere. Why is the brain organized in such a way that the language areas of the non-dominant hemisphere are the source of auditory hallucinations — unless this provided some previous functional purpose?

7. The “Religious” Function of the Right Temporal Lobe. Why is the right temporal lobe implicated in auditory hallucinations, intense religious sentiments, and the feeling of a sensed presence?

8. Visitation Dreams. Why do ancient and modern dreams differ so dramatically? Studies of dreams in classical antiquity show that the earliest recorded dreams were all “visitation dreams,” consisting of a visitation by a god or spirit that issues a command — essentially the bicameral waking experience of hearing verbal commands only during sleep. This has also been noted in tribal societies.

9. The Inadequacy of Current Thinking to Account for the Origin of Religion. Why are the worship of gods and dead ancestors found in all cultures worldwide?

10. Accounting for the Ubiquity of Divination. Similarly, why were divination practices also universal?

Jaynes’s theory of a previous bicameral mentality accounts for all of these phenomena, and, in the complete absence of persuasive alternative explanations, appears to be the best explanation for each of them. As one professor once said to me, “There is either Jaynes’s theory, or just ‘weird stuff happens.'”

Questions critics fail to answer (Julian Jaynes Society)

Weird stuff, indeed!!! But there is another, perhaps even more important question not listed above. That is, why did religious concepts change so profoundly during the Axial Age? As Joseph Henrich, the anthropologist whose paper we cited above, put it:

“The typical evolutionary approaches to religion don’t take into account that the kinds of gods we see in religions in the world today are not seen in small-scale societies. I mentioned the ancestor gods; other kinds of spirits can be tricked, duped, bought off, paid; you sacrifice in order to get them to do something; they’re not concerned about moral behavior…Whatever your story is, it’s got to explain how you got these bigger gods.”

Joseph Henrich on Cultural Evolution, WEIRD Societies (Conversations with Tyler)

In researching this series of posts, I’m struck by just how big a gulf there is between (to use Evans-Pritchard’s terms) Primitive Religion and Revelatory Religion.

Primitive religion, for all its dramatic variance, appears to be centered on direct revelation from gods, ancestor worship, and communal rituals. It is almost always rooted in some kind of animist belief system, and it is always polytheistic.

Revelatory religions, by contrast, tend to emphasize conscious control over one’s own personal behavior (e.g. the ‘Golden Rule’). They emphasize looking for revelation by introspection—going inward—something conspicuously missing from primitive religions. Instead of direct revelation, God’s words are now written down in holy books, permanent and unchanging, which are consulted to determine God’s will. Monotheism takes over from polytheism. And a significant portion of the population, unlike in primitive societies, accepts no god at all [atheism = a (without) theos (gods)]. As Brian McVeigh writes, quoting St. Augustine, “By shifting the locus of ‘spiritual activity from external rites and laws into the individual, Christianity brought God’s infinite value into each person.’ In other words, a newly spiritualized space, first staked out by Greek philosophers, was meta-framed and expanded into an inner kingdom where individual and Godhead could encounter each other.” (Psychohistory of Metaphors, pp. 52-53)

For their part, Henrich and other researchers hypothesize that the difference comes from the fact that Universal Religions of Revelation (so-called “Big Gods”) allowed larger and more diverse groups of people to cooperate, thus outcompeting parochial deities who couldn’t “scale up.” Because the “Big Gods” were all-seeing, all-knowing, omnipresent, moralizing deities with the power to reward and punish in the afterlife, the argument goes, they kept people on the straight-and-narrow, allowing for higher-level cooperation between unrelated strangers even without a shared cultural context. Basically, it was a meme that evolved via group selection. As they put it (PDF): “[C]ognitive representations of gods as increasingly knowledgeable and punitive, and who sanction violators of interpersonal social norms, foster and sustain the expansion of cooperation, trust and fairness towards co-religionist strangers.”

I call this “The Nannycam theory of Religion”. As God remarked to Peter Griffin on Family Guy, “I’m kind of like a nannycam. The idea that I *may* exist is enough for some people to behave better.”

By contrast, the breakdown of the bicameral mind provides a different kind of explanation. God now becomes one’s own conscience—the inner voice in one’s head. We become responsible for our own behavior through the choices we make. The revelatory religions serve as a guide, and as a replacement for the voices that no longer issue their commands. As Brian McVeigh explains:

…interiority is unnecessary for most of human behavior. If this is true, why did we as a species develop it about three thousand years ago (at least according to Julian Jaynes)? What was its purpose?

From the perspective of a sociopolitical organization [sic], interiority alleviates the need for strict hierarchical lines of command and control, which are inherently fragile. By placing a personal tool kit of command and control “inside a person’s head,” interiority becomes society’s inner voice by proxy.

Authorization based on strict hierarchical lines of command and control may be efficient for relatively small, well-circumscribed communities, but if history is any teacher, clear lines of control become less cost-effective in terms of socioeconomic capital the larger and more complex organizations become.

Once authorization for immediate control of self becomes interiorized and individual-centered, an organization actually becomes stronger as its orders, directives, doctrines, admonitions, and warnings become the subjective truths of personal commitment.

Interiority, then, is a sociopolitically pragmatic tool used for control in the same way assigning names to individuals or categorizing people into specialized groups for economic production is. From the individual’s perspective, interiority makes the social environment easier to navigate. Before actually executing a behavior, we can “see” ourselves “in our heads” carrying out an action, thereby allowing us to shortcut actual behavioral sequences that may be time-consuming, difficult, or dangerous.
Brian J. McVeigh; A Psychohistory of Metaphors, pp. 33-34

There are many more “conventional” explanations of the universality of religious beliefs. One popular theory is put forward by anthropologist Pascal Boyer in “Religion Explained.” Basically, he argues that religion is an unintended side effect of what software programmers would refer to as “bugs” in the human cognitive process:

Basing his argument on this evolutionary reasoning, Boyer asserts that religion is in effect a cognitive “false positive,” i.e., a faulty application of our innate mental machinery that unfortunately leads many humans to believe in the existence of supernatural agents like gods that do not really exist.

This also leads Boyer to describe religious concepts as parasitic on ordinary cognitive processes; they are parasitic in the sense that religion uses those mental processes for purposes other than what they were designed by evolution to achieve, and because of this their successful transmission is greatly enhanced by mental capacities that are there anyway, gods or no gods.

Boyer judges the puzzling persistence of religion to be a consequence of natural selection designing brains that allowed our prehistoric ancestors to adapt to a world of predators. A brain molded by evolution to be on the constant lookout for hidden predators is likely to develop the habit of looking for all kinds of hidden agencies. And it is just this kind of brain that will eventually start manufacturing images of the concealed actors we normally refer to as “gods.”

In this sense, then, there is a natural, evolutionary explanation for religion, and we continue to entertain religious ideas simply because of the kinds of brains we have. On this view, the mind it takes to have religion is the mind we have…Religious concepts are natural both in the phenomenological sense that they emerge spontaneously and develop effortlessly, and in the natural sense that also religious imagination belongs to the world of nature and is naturally constrained by genes, central nervous systems, and brains.
J. Wentzel van Huyssteen; Alone In The World? pp. 261-263

Of course, as Jaynes would point out, the gods as depicted in ancient literature are hardly “hidden actors.” They often speak directly to individuals and issue commands which are subsequently obeyed! Massive amounts of time and effort are spent building temples to them. That seems like an awful lot of work to satisfy a simple “false positive” in human cognition.

Other theories focus on what’s called the Theory of Mind. For example: What Religion is Really All About (Psychology Today). As a Reddit commenter put it succinctly:

The basic thesis is that we believe in gods (or supernatural minds in general) because of cognitive adaptations that evolved for social interaction. It was evolutionarily advantageous for monkeys to construct mental models of what other monkeys were feeling/perceiving/thinking, and it’s a natural step from there to believing in disembodied minds, minds that can exist without the monkey. Related YouTube lecture: Why We Believe In Gods.

Testimony to the Sumerian worship of the Cookie Monster

Perhaps. But there are an awful lot of signs in the archaeological record that our ancestors thought very differently than we do, to wit:

1. Eye idols (see above)

2. “Goddess” figurines and idols. Jaynes: “Figurines in huge numbers have been unearthed in most of the Mesopotamian cultures, at Lagash, Uruk, Nippur, and Susa. At Ur, clay figures painted in black and red were found in boxes of burned brick placed under the floor against the walls but with one end opened, facing into the center of the room. The function of all these figurines, however, is as mysterious as anything in all archaeology. The most popular view goes back to the uncritical mania with which ethnology, following Frazer, wished to find fertility cults at the drop of a carved pebble. But if such figurines indicate something about Frazerian fertility, we should not find them where fertility was no problem. But we do.” Origins, p. 166. As the old joke in archaeology goes, if you can’t explain something, just claim it was for ‘fertility.’

3. Human Sacrifice

4. Trepanation

5. God kings:
Jaynes: “I am suggesting that the dead king, thus propped up on his pillow of stones, was in the hallucinations of his people still giving forth his commands…and that, for a time at least, the very place, even the smoke from its holy fire, rising into visibility from furlongs around, was, like the gray mists of the Aegean for Achilles, a source of hallucinations and of the commands that controlled the Mesolithic world of Eynan.

This was a paradigm of what was to happen in the next eight millennia. The king dead is a living god. The king’s tomb is the god’s house…[which]…continues through the millennia as a feature of many civilizations, particularly in Egypt. But, more often, the king’s-tomb part of the designation withers away. This occurs as soon as a successor to a king continues to hear the hallucinated voice of his predecessor during his reign, and designates himself as the dead king’s priest or servant, a pattern that is followed throughout ancient Mesopotamia. In place of the tomb is similarly a temple. And in place of the corpse is a statue, enjoying even more service and reverence, since it does not decompose.” Origins, pp. 142-43

6. Grave goods

7. Cannibalism

8. Veneration of ancestors

9. Mummification of animals

Not to mention things like this:

A common practice among these city dwellers [of Çatalhöyük] was burying their dead under their floors, usually under raised platforms that served as beds. Often they would dig up the skulls of the dead later, plaster their faces (perhaps to recreate the faces of loved ones), and give them to other houses. Archaeologists frequently find skeletons from several people intermingled in these graves, with skulls from other people added. Wear and tear on some plastered skulls suggest they were traded back and forth, sometimes for generations, before being reburied. According to Hodder, such special skulls are just as often female as they are male.

Incredible discovery of intact female figurine from neolithic era in Turkey (Ars Technica)

The Voices in Your Head

What If God Was One Of Us?

What We Talk About When We Talk About Consciousness

The Cathedral of the Mind

One of Oliver Sacks’ last popular books, published in 2012, was about hallucinations, titled, appropriately, Hallucinations. In it, he takes a look at numerous types of hallucinatory phenomena—hallucinations among the blind (Charles Bonnet Syndrome); sensory deprivation; delirium; grieving; Post-traumatic Stress Disorder; epilepsy; migraines; hypnagogia; Parkinson’s Disease; psychedelic usage; religious ecstasy; and so on.

There are a number of interesting facts presented about auditory hallucinations. One is that although hearing voices is indeed considered indicative of schizophrenia, in most cases auditory hallucinations are experienced by perfectly normal people with no other signs of mental illness.

Sacks begins his chapter on auditory hallucinations by describing an experiment in 1973 in which eight “fake” patients went to mental hospitals complaining of hearing voices, but displaying no other signs of mental illness or distress. In each case, they were diagnosed as schizophrenic (one was considered manic-depressive), committed to a facility for up to two months, and given anti-psychotic medication (which they obviously did not take). While committed, they even openly took notes on their experiences, yet none of the doctors or staff ever wised up to the ruse. The other patients, however, were much more perceptive. They could clearly see that the fake patients were not at all mentally ill, and even asked them, “What are you doing here?” Sacks concludes:

This experiment, designed by David Rosenhan, a Stanford psychologist (and himself a pseudopatient), emphasized, among other things, that the single symptom of “hearing voices” could suffice for an immediate, categorical diagnosis of schizophrenia even in the absence of any other symptoms or abnormalities of behavior. Psychiatry, and society in general, had been subverted by the almost axiomatic belief that “hearing voices” spelled madness and never occurred except in the context of severe mental disturbance. p. 54

While people often mischaracterize Jaynes’ theory as “everyone in the past was schizophrenic,” it turns out that even today most voices are heard by perfectly normal, otherwise rational, sane, high-functioning people. This has been recognized for over a century in medical literature:

“Hallucinations in the sane” were well recognized in the nineteenth century, and with the rise of neurology, people sought to understand more clearly what caused them. In England in the 1880s, the Society for Psychical Research was founded to collect and investigate reports of apparitions or hallucinations, especially those of the bereaved, and many eminent scientists—physicians as well as physiologists and psychologists—joined the society (William James was active in the American branch)…These early researchers found that hallucinations were not uncommon among the general population…Their 1894 “International Census of Waking Hallucinations in the Sane” examined the occurrence and nature of hallucinations experienced by normal people in normal circumstances (they took care to exclude anyone with obvious medical or psychiatric problems). Seventeen thousand people were sent a single question:

“Have you ever, when believing yourself to be completely awake, had a vivid impression of seeing or being touched by a living being or inanimate object, or of hearing a voice, which impression, as far as you could discover, was not due to an external physical cause?”

More than 10 percent responded in the affirmative, and of those, more than a third heard voices. As John Watkins noted in his book Hearing Voices, hallucinated voices “having some kind of religious or supernatural content represented a small but significant minority of these reports.” Most of the hallucinations, however, were of a more quotidian character. pp. 56-57

While the voices heard by schizophrenics are often threatening and controlling, the voices heard by most people do not appear to have any effect on normal functioning at all.

The voices that are sometimes heard by people with schizophrenia tend to be accusing, threatening, jeering, or persecuting. By contrast, the voices hallucinated by the “normal” are often quite unremarkable, as Daniel Smith brings out in his book Muses, Madmen, and Prophets: Hearing Voices and the Borders of Sanity. Smith’s own father and grandfather heard such voices, and they had different reactions. His father started hearing voices at the age of thirteen. Smith writes:

“These voices weren’t elaborate, and they weren’t disturbing in content. They issued simple commands. They instructed him, for instance, to move a glass from one side of the table to another or to use a particular subway turnstile. Yet in listening to them and obeying them his interior life became, by all accounts, unendurable.”

Smith’s grandfather, by contrast, was nonchalant, even playful, in regard to his hallucinatory voices. He described how he tried to use them in betting at the racetrack. (“It didn’t work, my mind was clouded with voices telling me that this horse could win or maybe this one is ready to win.”) It was much more successful when he played cards with his friends. Neither the grandfather nor the father had strong supernatural inclinations; nor did they have any significant mental illness. They just heard unremarkable voices concerned with everyday things–as do millions of others. pp. 58-59

To me, this sounds an awful lot like Jaynes’s descriptions of the reality of bicameral man, doesn’t it? The voices command, and the people obey the commands. Yet they are still outwardly normal, functioning individuals. You may never know whether someone is obeying voices in their head unless they explicitly tell you:

This is what Jaynes calls “bicameral mind”: one part of the brain (the “god” part) evaluates the situation and issues commands to the other part (the “man” part) in the form of auditory and, occasionally, visual hallucinations (Jaynes hypothesises that the god part must have been located in the right hemisphere, and the man part, in the left hemisphere of the brain). The specific shapes and “identities” of these hallucinations depend on the culture, on what Jaynes calls “collective cognitive imperative”: we see what we are taught to see, what our learned worldview tells us must be there.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

In most of the cases described in Hallucinations, people didn’t attribute their auditory or visual hallucinations to any kind of supernatural entity or numinous experience. A few did refer to them as “guardian angels”. But what if they had grown up in a culture where this sort of thing was considered normal, if not commonplace, as was the case for most of ancient history?

Hearing voices occurs in every culture and has often been accorded great importance–the gods of Greek myth often spoke to mortals, and the gods of the great monotheistic traditions, too. Voices have been significant in this regard, perhaps more so than visions, for voices, language, can convey an explicit message or command as images alone cannot.

Until the eighteenth century, voices—like visions—were ascribed to supernatural agencies: gods or demons, angels or djinns. No doubt there was sometimes an overlap between such voices and those of psychosis or hysteria, but for the most part, voices were not regarded as pathological; if they stayed inconspicuous and private, they were simply accepted as part of human nature, part of the way it was with some people. p. 60

In the book The Master and His Emissary, Iain McGilchrist dismisses Jaynes’s theory by claiming that schizophrenia is a disease of recent vintage, and only emerged sometime around the nineteenth century. Yet, as the Jaynes foundation website points out (2.7), this is merely when the diagnosis of schizophrenia was established. Before that time, it would not have been considered pathological or a disease at all. We looked at how Akhnaten’s radical monotheism was possibly inspired by God “speaking” directly to him, issuing commands to build temples, and so forth. Certainly, he thought it was, at any rate. And he’s hardly alone. We’ve already looked at notable historical personages like Socrates, Muhammad, Joan of Arc, and Margery Kempe, and there are countless other examples. Schizophrenia is no more “new” than is PTSD, which was barely recognized until after World War One, when it was called “shell shock.”

“My Eyes in the Time of Apparition” by August Natterer. 1913

Another thing Sacks points out is that command hallucinations tend to occur in stressful situations or times of extreme duress, or when one has some sort of momentous or climactic decision to make, just as Jaynes posited. In times of stress, perfectly ordinary, sane people often hear an “outside” voice coming from somewhere guiding their actions. This is, in fact, quite common. In “normal” conditions we use instinct or reflex to guide our actions. But in emergencies, we hear a voice that seems to come from somewhere outside our own consciousness:

If, as Jaynes proposes, we take the earliest texts of our civilisation as psychologically valid evidence, we begin to see a completely different mentality. In novel and stressful situations, when the power of habit doesn’t determine our actions, we rely on conscious thinking to decide what to do, but, for example, the heroes of the Iliad used to receive their instructions from gods — who would appear in times of uncertainty and stress.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

For example, Dr. Sacks is perfectly aware that his “inner monologue” is internally generated. Yet in a stressful situation, the voice became externalized—something that seemed to speak to him from some outside source:

Talking to oneself is basic to human beings, for we are a linguistic species; the great Russian psychologist Lev Vygotsky thought that inner speech was a prerequisite of all voluntary activity. I talk to myself, as many of us do, for much of the day–admonishing myself (“You fool! Where did you leave your glasses?”), encouraging myself (“You can do it!”), complaining (“Why is that car in my lane?”) and, more rarely, congratulating myself (“It’s done!”). Those voices are not externalized; I would never mistake them for the voice of God, or anyone else.

But when I was in danger once, trying to descend a mountain with a badly injured leg, I heard an inner voice that was wholly unlike my normal babble of inner speech. I had a great struggle crossing a stream with a buckled and dislocating knee. The effort left me stunned, motionless for a couple of minutes, and then a delirious languor came over me, and I thought to myself, Why not rest here? A nap maybe? This was immediately countered by a strong, clear, commanding voice, which said, “You can’t rest here—you can’t rest anywhere. You’ve got to go on. Find a pace you can keep up and go on steadily.” This good voice, the Life voice, braced and resolved me. I stopped trembling and did not falter again. pp. 60-61

Sacks gives some other anecdotal examples of people under extreme duress:

Joe Simpson, climbing in the Andes, also had a catastrophic accident, falling off an ice ledge and ending up in a deep crevasse with a broken leg. He struggled to survive, as he recounted in Touching the Void–and a voice was crucial in encouraging and directing him:

“There was silence, and snow, and a clear sky empty of life, and me, sitting there, taking it all in, accepting what I must try to achieve. There were no dark forces acting against me. A voice in my head told me that this was true, cutting through the jumble in my mind with its coldly rational sound.”

“It was as if there were two minds within me arguing the toss. The *voice* was clean and sharp and commanding. It was always right, and I listened to it when it spoke and acted on its decisions. The other mind rambled out a disconnected series of images, and memories and hopes, which I attended to in a daydream state as I set about obeying the orders of the *voice*. I had to get to the glacier….The *voice* told me exactly how to go about it, and I obeyed while my other mind jumped abstractly from one idea to another…The *voice*, and the watch, urged me into motion whenever the heat from the glacier halted me in a drowsy exhausted daze. It was three o’clock—only three and a half hours of daylight left. I kept moving but soon realized that I was making ponderously slow headway. It didn’t seem to concern me that I was moving like a snail. So long as I obeyed the *voice*, then I would be all right.”

Such voices may occur with anyone in situations of extreme threat or danger. Freud heard voices on two such occasions, as he mentioned in his book On Aphasia:

“I remember having twice been in danger of my life, and each time the awareness of the danger occurred to me quite suddenly. On both occasions I felt “this was the end,” and while otherwise my inner language proceeded with only indistinct sound images and slight lip movements, in these situations of danger I heard the words as if somebody was shouting them into my ear, and at the same time I saw them as if they were printed on a piece of paper floating in the air.”

The fact that the gods tend to come to mortals in the Iliad during times of stress has been noted by Judith Weissman, author of Of Two Minds: Poets Who Hear Voices:

Judith Weissman, a professor of English at Syracuse University, notes that in the Iliad the gods speak directly to the characters over 30 times, often when the characters are under stress. Many of the communications are short, brief exhortations. The most common godly command, issued when the men are fearful in battle, is to “fight as your father did.” At one point in the Iliad, the god Apollo picks up Hektor, who has fallen in battle, and says, “So come now, and urge on your cavalry in their numbers / to drive on their horses against the hollow ships” (15.258-59)…

Personality Before the Axial Age (Psychology Today)

Hallucinations are also quite common in soldiers suffering from PTSD. If modern soldiers experience PTSD, how much more traumatic would be ancient battles, like those described so vividly in the Iliad? I can’t even imagine standing face-to-face with a foe, close enough to feel his hot breath, and having to shove a long, sharp metal object directly into his flesh without hesitation; blood gushing everywhere and viscera sliding out of his belly onto the dirt. And yet this was the reality of ancient warfare in the Bronze and Iron ages.  Not to mention the various plagues, dislocations, natural disasters, invasions, and other assorted traumatic events.

People with PTSD are also prone to recurrent dreams or nightmares, often incorporating literal or somewhat disguised repetitions of the traumatic experiences. Paul Chodoff, a psychiatrist writing in 1963 about the effects of trauma in concentration camp survivors, saw such dreams as a hallmark of the syndrome and noted that in a surprising number of cases, they were still occurring a decade and a half after the war. The same is true of flashbacks. p. 239

Veterans with PTSD may hallucinate the voices of dying comrades, enemy soldiers, or civilians. Holmes and Tinnin, in one study, found that the hearing of intrusive voices, explicitly or implicitly accusing, affected more than 65 percent of veterans with combat PTSD. p. 237 note 4

Another very common circumstance in which otherwise “sane” people hallucinate sounds or images is grief and bereavement. Sometimes this is just hearing the voice of the departed person speaking to them or calling them. Sometimes they may actually see the person. And sometimes they may even carry on extended conversations with their deceased family members!

Bereavement hallucinations, deeply tied to emotional needs and feelings, tend to be unforgettable, as Elinor S., a sculptor and printmaker, wrote to me:

“When I was fourteen years old, my parents, brother and I were spending the summer at my grandparents’ house as we had done for many previous years. My grandfather had died the winter before.”

“We were in the kitchen, my grandmother was at the sink, my mother was helping and I was still finishing dinner at the kitchen table, facing the back porch door. My grandfather walked in and I was so happy to see him that I got up to meet him. I said ‘Grampa,’ and as I moved towards him, he suddenly wasn’t there. My grandmother was visibly upset, and I thought she might have been angry with me because of her expression. I said to my mother that I had really seen him clearly, and she said that I had seen him because I wanted to. I hadn’t been consciously thinking of him and still do not understand how I could have seen him so clearly. I am now seventy-six years of age and still remember the incident and have never experienced anything similar.”

Elizabeth J. wrote to me about a grief hallucination experienced by her young son:

“My husband died thirty years ago after a long illness. My son was nine years old at the time; he and his dad ran together on a regular basis. A few months after my husband’s death, my son came to me and said that he sometimes saw his father running past our home in his yellow running shorts (his usual running attire). At the time, we were in family grief counselling, and when I described my son’s experience, the counsellor did attribute the hallucinations to a neurologic response to the grief. This was comforting to us, and I still have the yellow running shorts.” pp. 233-234

It turns out that this kind of thing is extremely common:

A general practitioner in Wales, W.D. Rees, interviewed nearly three hundred recently bereaved people and found that almost half of them had illusions or full-fledged hallucinations of a dead spouse. These could be visual, auditory, or both—some of the people interviewed enjoyed conversations with their hallucinated spouses. The likelihood of such hallucinations increased with the length of the marriage, and they might persist for months or even years. Rees considered these hallucinations to be normal and even helpful in the mourning process. p. 234

A group of Italian psychological researchers published a paper in 2014 entitled “Post-bereavement hallucinatory experiences: A critical overview of population and clinical studies.” According to their paper, after an extensive review of the peer-reviewed literature, they found that anywhere from 30 to 60 percent of grieving people experienced what they called “post-bereavement hallucinatory experiences” (PBHEs). Is it any wonder that veneration of the dead was so common across cultures, from the Old World to Africa to Asia to the Americas to Polynesia? It was almost universally assumed across ancient cultures that the dead still existed in some way. Some scholars, such as Herbert Spencer, posited that ancestor worship was the origin of all religious rites and practices.

What is the fundamental cause of all these aural hallucinations? As neurologist Sacks freely admits, the source of these phenomena is at present unknown and understudied. Sacks references Jaynes’s “Origin of Consciousness…” in his speculation on possible explanations:

Auditory hallucinations may be associated with abnormal activation of the primary auditory cortex; this is a subject which needs much more investigation not only in those with psychosis but in the population at large–the vast majority of studies so far have examined only auditory hallucinations in psychiatric patients.

Some researchers have proposed that auditory hallucinations result from a failure to recognize internally generated speech as one’s own (or perhaps it stems from a cross-activation with the auditory areas so that what most of us experience as our own thoughts becomes “voiced”).

Perhaps there is some sort of psychological barrier or inhibition that normally prevents most of us from “hearing” such inner voices as external. Perhaps that barrier is somehow breached or underdeveloped in those who do hear constant voices. Perhaps, however, one should invert the question–and ask why most of us do not hear voices.

In his influential 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes speculated that, not so long ago, all humans heard voices–generated internally from the right hemisphere of the brain, but perceived (by the left hemisphere) as if external, and taken as direct communications from the gods. Sometime around 1000 B.C., Jaynes proposed, with the rise of modern consciousness, the voices became internalized and recognized as our own…Jaynes thought that there might be a reversion to “bicamerality” in schizophrenia and some other conditions. Some psychiatrists (such as Nasrallah, 1985) favor this idea or, at the least, the idea that the hallucinatory voices in schizophrenia emanate from the right side of the brain but are not recognized as one’s own, and are thus perceived as alien…It is clear that “hearing voices” and “auditory hallucinations” are terms that cover a variety of different phenomena. pp. 63-64

Recently, neuroscientists have hypothesized the existence of something called an “efference copy,” which the brain makes of certain self-initiated signals. The presence of the efference copy informs the brain that certain actions have originated from itself, and that the resulting inputs are self-generated. For example, the efference copy of your hand movements is what prevents you from tickling yourself. The lack of this efference copy has been postulated as the reason why schizophrenics can’t recognize the voices in their heads as their own. A temporary suppression of the efference copy may likewise be why so many otherwise “sane” people sometimes hear voices as something coming from outside their own mind.

Efference copy is a neurological phenomenon first proposed in the early 19th century in which efferent signals from the motor cortex are copied as they exit the brain and are rerouted to other areas in the sensory cortices. While originally proposed to explain the perception of stability in visual information despite constant eye movement, efference copy is now seen as essential in explaining a variety of experiences, from differentiating between exafferent and reafferent stimuli (stimulation from the environment or resulting from one’s own movements respectively) to attenuating or filtering sensation resulting from willed movement to cognitive deficits in schizophrenic patients to one’s inability to tickle one’s self.

Efference Copy – Did I Do That? Cody Buntain, University of Maryland (PDF)
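To make the proposed mechanism concrete, here is a toy sketch of the “subtract the prediction” idea. This is my own illustration in Python, not code or a model from Sacks or Buntain, and the numbers are arbitrary; it only shows the logic of tagging and attenuating self-generated signals.

```python
# Toy model of efference-copy attenuation (illustrative only, not a neural model).
# "Sensation" is reduced to a single number for clarity.

def perceived_intensity(raw_input, efference_copy=None):
    """Felt intensity after subtracting any self-generated prediction."""
    if efference_copy is None:           # externally caused: nothing to subtract
        return raw_input
    return max(0.0, raw_input - efference_copy)

# Self-generated touch: the motor command's copy predicts, and so cancels,
# most of the sensation -- which is why you can't tickle yourself.
print(perceived_intensity(1.0, efference_copy=0.9))   # 0.1 (attenuated)

# Externally generated touch: no copy arrives, so the full intensity is felt.
print(perceived_intensity(1.0))                       # 1.0 (ticklish)

# Hypothesized failure mode: the copy is suppressed or absent, so self-generated
# inner speech arrives untagged and unattenuated -- felt as an external voice.
print(perceived_intensity(1.0, efference_copy=0.0))   # 1.0, though self-caused
```

On this toy account, the difference between “my thought” and “a voice” is nothing about the signal itself, only whether it arrives with its copy attached.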

I talk to myself all the time. The words I’m typing in this blog post are coming from some kind of “inner self.” But I feel like that inner voice and “me” are exactly the same, as I’m guessing you do too, and so do most of us “normal” people. But is that something inherent in the brain’s bioarchitecture, or is that something we are taught through decades of schooling and absorbing our cultural context? Might writing and education play an important role in the “breakdown” of bicameralism? We’ll take a look at that next time.

The version of bicameralism that seems most plausible to me is the one where the change is a memetic rather than evolutionary-genetic event. If it were genetic, there would still be too many populations that don’t have it, but whose members when plucked out of the wilderness and sent to university seem to think and feel and perceive the world the way the rest of us do.

But integrated, introspective consciousness could be somewhere between language and arithmetic on the list of things that the H. sapiens brain has always been capable of but won’t actually do if you just have someone raised by wolves or whatnot. Language, people figure out as soon as they start living in tribes. Arithmetic comes rather later than that. If Jaynes is right, unicameral consciousness is something people figure out when they have to navigate an environment as complex as a bronze-age city, and once they have the knack they teach their kids without even thinking about it. Or other peoples’ kids, if they are e.g. missionaries.

At which point brains whose wiring is better suited to the new paradigm will have an evolutionary advantage, and there will be a genetic shift, but as a slow lagging indicator rather than a cause.

John Schilling – Comment (Slate Star Codex)

[I’m just going to drop this here—much of the cause of depression stems from negative self-talk (“It’s hopeless!”; “Things will never get better”; “I’m worthless”; etc.). In such cases, this “inner voice,” rather than being encouraging, is a merciless hector to the depressed individual. As psychologists often point out to their patients, we would never talk to anyone else as callously as we talk to ourselves. Why is that? And it seems interesting that there are no references to depression in bicameral civilizations as far as I know. Ancient literature is remarkably free of “despair suicides” (as opposed to suicides for other reasons, such as defeat in battle or humiliation).]

The Cathedral of the Mind

Previously: What If God Was One Of Us? and What We Talk About When We Talk About Consciousness

Nothing here but spilled chocolate milk.

There are what I call “hard” and “soft” interpretations of Jaynes’s thesis. The “hard” interpretation is exactly what is posited in the book: humans did not have reflexive self-awareness in the way we describe it today until roughly the Bronze Age.

The “soft” interpretation is that a shift in consciousness occurred, quite possibly in the way that Jaynes described it, but that it occurred around 40-70,000 years ago during the Ice Age, long before writing or complex civilizations, when our ancestors were still hunter-gatherers. Another “soft” interpretation is that our ancestors definitely thought differently than we do, but they were still conscious agents nonetheless, and that the gods and spirits they referred to so often, and who seemed to control their lives, were merely figments of their imagination.

The Great Leap Forward

The idea that humans experienced some sort of significant cognitive transformation sometime after becoming anatomically modern is no longer controversial. This is the standard view in archaeology. Scientists call this the transition from anatomically modern humans to behaviorally modern humans. This article has a good summary:

… During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appear in Europe together with complex stone tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and thus, archaeologists cite evidence for these behaviors as proof of human behavioral modernity.

Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia and in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier “archaic” African and Asian humans with their “behaviorally-modern” Upper Paleolithic counterparts, explaining the differences between them in terms of a single “Human Revolution” that fundamentally changed human biology and behavior.

Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different than that of more-recent “modern” humans.

Earliest humans not so different from us, research suggests (Science Daily)

What no one knows, however, is what caused it, how it took place, or exactly when and where it took place. But the idea that there could be some kind of drastic cognitive shift without significant physical changes is no longer fringe. As Jared Diamond wrote:

Obviously, some momentous change took place in our ancestors’ capabilities between about 100,000 and 50,000 years ago. That Great Leap Forward poses two major unresolved questions, regarding its triggering cause and its geographic location. As for its cause, I argued in my book The Third Chimpanzee for the perfection of the voice box and hence the anatomical basis for modern language, on which the exercise of human creativity is so dependent. Others have suggested instead that a change in brain organization around that time, without a change in brain size, made modern language possible. Jared Diamond, Guns, Germs and Steel, p. 40

Archaeologists tend to look at all the things in the archaeological record that indicate that Paleolithic humans were like us (e.g. complex tools, art, body ornamentation, trade, burial of the dead, food storage and preservation), but for some reason they downplay or dismiss all the things that show that, in many ways, they were quite different from us. That is, in some respects, they were not nearly as “behaviorally modern” as we tend to assume.

For example, here are some other things they did during this time period: carved ivory and wooden idols; made sacrifices to their gods (including mass child sacrifice); practiced cannibalism; built sleep temples; built strange statues with eyes and no mouths (“eye idols”); practiced astrology; and regularly poked holes in their skulls for reasons we are still unsure of. In other words, for all the evidence that they thought like us, there is other evidence suggesting that their thinking was substantially different from ours in many ways! But we tend to emphasize the former, and ignore the latter. This leads to Jaynes’s idea that there may have been more than just one Great Leap Forward, and that human consciousness has changed significantly since the establishment of architectural civilizations.

Let’s take a quick detour into how scientists think the human brain may have developed to gain some insight into whether there may be evidence for bicameralism.

A short digression into brain architecture

The idea that the brain is composed of previous adaptations which have been extended is fairly well accepted. The Triune Brain hypothesis is that we have a “lizard brain” which controls base functions like breathing, and is highly aggressive and territorial. Then we have a rodent (paleomammalian) brain that allows us to perform more complex social functions and solve basic problems. Then we have the primate (neomammalian) brain, including the neocortex, that allows for larger groups and advanced reasoning. This is seen as basically correct in broad strokes, although a vast oversimplification of the complexities of how the primate brain developed.

From Primitive Parts, A Highly Evolved Human Brain (NPR)

The brain of an organism cannot just “go down” for maintenance while it upgrades. It has to keep the organism alive and reproducing. So new modules have to be added, ad hoc, on top of what’s already there. This leads to a brain of aggregations in which newer features have to mix with older ones, much the way legacy code persists inside newer software. This, as you can imagine, can lead to “buggy code.”
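The software analogy can be made literal with a little sketch. This is my own illustration in Python, not anything from the neuroscience literature, and the behaviors are cartoonish on purpose; the point is only the shape of the design, in which each newer layer wraps the older ones instead of replacing them.

```python
# A cartoon of accreted brain "modules": nothing is ever removed, so the old
# behavior stays reachable whenever a newer layer is bypassed.

def reptilian_response(threat_level):
    """Oldest layer: crude but fast, and always present."""
    return "fight-or-flight" if threat_level > 0.5 else "rest"

def mammalian_response(threat_level, kin_nearby):
    """Newer layer: modulates, but does not remove, the older one."""
    base = reptilian_response(threat_level)       # legacy call, still live
    if base == "fight-or-flight" and kin_nearby:
        return "protect-the-group"
    return base

def neocortical_response(threat_level, kin_nearby, time_to_think):
    """Newest layer: can override -- unless it is starved of time."""
    if not time_to_think:
        # The "buggy code": under pressure, control falls through to the old stack.
        return mammalian_response(threat_level, kin_nearby)
    return "deliberate-plan"

print(neocortical_response(0.9, kin_nearby=True, time_to_think=False))
# -> "protect-the-group": the older layers answered, not the newest one.
```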

Archaeologist Steven Mithen wrote a book about the prehistory of the mind—what we might call “cognitive archaeology.” He notes that certain processes seem to come automatically to the brain—like learning language—while others—like multiplying two large numbers together in one’s head—do not. This means that the brain is not like, say, a “general purpose” I/O microcomputer, as it’s often described. He writes: “The mind doesn’t just accumulate information and regurgitate it. And nor is it indiscriminate in the knowledge it soaks up. My children—like all children—have soaked up thousands of words effortlessly, but their suction seems to lose its power when it comes to multiplication tables.”

This indicates that the human mind has some inherent, or built-in, propensities, alongside the general intelligence all animals have. That means they may be of evolutionary origin. Spoken language appears to be one of these. While we send our kids to school for years to try and pound algebra, trigonometry and the correct spelling of words into them, children soak up language from their environment shortly after birth with hardly any effort at all.

Noam Chomsky invoked something he called the “poverty of the stimulus” to make this point. He meant that given how fast and accurately children learn language by osmosis, there is no way it comes from “just” environmental inputs, like a computer. Children must, in some sense, be pre-programmed to learn language, and thus language’s fundamental construction must be related to how the brain functions—something he called a “universal grammar.” Over time, more of these apparently “inherent” behaviors have been identified in humans:

It became increasingly unpopular to assume that a basic understanding of the world can be built entirely from experience. This was in part instigated by theorist Noam Chomsky, who argued that something as complex as the rules of grammar cannot be picked up from exposure to speech, but is supplied by an innate “language faculty.”

Others followed suit and defined further “core areas” in which knowledge allegedly cannot be pieced together from experience but must be innate. One such area is our knowledge of others’ minds. Some even argue that a basic knowledge of others’ minds is not only possessed by human infants, but must be evolutionarily old and hence shared by our nearest living relatives, the great apes.

Children understand far more about other minds than long believed (The Conversation)

This means that, rather than being like a computer or a sponge, argues Mithen, the mind is more akin to a “Swiss army knife,” with different modules for different uses, but all fundamentally a part of the same basic “object.” One study, for example, has found that the ability to recognize faces is innate. This explains the human penchant for pareidolia.

You (probably) see a face in this chair, but do you ever see a chair in someone’s face?

Using the Swiss Army knife metaphor, Mithen argues that these various specialized cognitive modules overlap with what he calls “general intelligence.” This overlap between specialized intelligences and the general intelligence leads to a lot of unique features of human cognition such as creativity, socialization, and, perhaps, constructing things like ‘gods’ and the ‘self.’ Here’s a good summary:

Mithen…[argues]…that the mind should … be seen as a series of specialized “cognitive domains” or “intelligences,” each of which is dedicated to some specific type of behavior, such as specialized modules for acquiring language, or tool-using abilities, or engaging in social interaction…his argument will be that the modern human mind has an architecture built up by millions of years of evolution, which finally yielded a mind that creates, thinks, and imagines.

Mithen…highlights recent efforts in psychology to move beyond thinking of the mind as running a general-purpose program, or as a sponge indiscriminately soaking up whatever information is around. A new analogy for the human mind has taken its place: the Swiss army knife, a tool with specialized devices, designed for coping with very special types of problems.

This is found especially in Howard Gardner’s important book Frames of Mind: The Theory of Multiple Intelligences. In this well-known work we are presented with a Swiss-army knife architectural model for the mind, with each “blade,” or cognitive domain, described as a specialized intelligence. Gardner initially identified seven intelligences: linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, and two forms of personal intelligence (one for looking at one’s own mind, one for looking outward toward others).

Alone in the World? by Wentzel Van Huyssteen, pp. 194-195

From this, Mithen proposes a new metaphor – that of a cathedral, with a central nave standing in for generalized intelligence, and numerous walled-off enclaves (side chapels) for the various specialized cognitive functions. In a nutshell, Mithen argues that the “walls” between these areas began to break down over time, and the services in the side chapels increasingly blended together with the “main service” taking place in the nave. This mixture gives rise to the various symbolic and metaphorical aspects of human consciousness—what he terms “cognitive fluidity.”

Mithen fills out the three stages in the historical development of the human mind as follows:

In Phase One human minds were dominated by a central “nave” of generalized intelligence.

Phase Two adds multiple “chapels” of specialized intelligences, including the cognitive domains of language, social intelligence, technical intelligence, and natural history intelligence.

Phase Three brings us to the modern mind in which the “chapels” or cognitive domains have been connected, resulting in what Mithen calls cognitive fluidity. This creative combination of the various cognitive domains of the mind would ultimately have profound consequences for the nature of the human mind. With this cognitive fluidity, the mind acquired not only the ability for, but also a positive passion for, metaphor and analogy. And with thoughts originating in different domains engaging one another, the result is an almost limitless capacity for imagination.

It is exactly this amazing ability that would make our species so different from early humans who shared the same basic mind – a Swiss army knife of multiple intelligences, but with very little interaction between them.

Mithen’s useful model here, again, is a cathedral with several isolated chapels, within which unique services of thought were undertaken, each barely audible elsewhere in the cathedral. In Mithen’s words: “Early humans seem to have been so much like us in some respects, because they had these specialized cognitive domains; but they seem so different because they lacked the vital ingredient of the modern mind: cognitive fluidity”

[Behavioral modernity] is when “doors and windows were inserted between chapel walls”, when thoughts and information began flowing freely among the diverse cognitive domains or intelligences. Specialized intelligences no longer had to work in isolation, but a “mapping across knowledge systems” now became possible, and from this “transformation of conceptual spaces” creativity could now arise as never before.

Mithen thus appropriates some of the work of cognitive psychologists, to make the related point that in both development and evolution the human mind undergoes (or has undergone) a transformation from being constituted by a series of relatively independent cognitive domains to a situation in which ideas, ways of thinking, and knowledge now flow freely between such domains. This forms the basis for the highly plausible hypothesis that during this amazing emergent period of transition, the human brain was finally hardwired for cognitive fluidity, yielding imagination and creativity.

Alone in the World? by Wentzel Van Huyssteen, pp. 195-197
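It may help to render the “chapels” in miniature. The sketch below is mine, not Mithen’s, and the knowledge base is obviously a toy; it only shows what changes when the walls come down: isolated modules can each consult themselves, while a fluid mind can also map ideas from one domain onto another.

```python
# Toy "chapels": each domain-specific intelligence holds its own knowledge.
domains = {
    "social":          ["persons have intentions", "persons get angry"],
    "natural_history": ["storms happen", "rivers flood"],
    "technical":       ["flint can be knapped", "handles improve tools"],
}

def walled_off(domain):
    """Phase Two: each chapel can only consult itself."""
    return domains[domain]

def fluid_combine(domain_a, domain_b):
    """Phase Three: pair ideas across domains -- the shape of a metaphor."""
    return [f"{a} <-> {b}" for a in domains[domain_a] for b in domains[domain_b]]

# Social intelligence applied to the nonsocial world: mapping "persons get
# angry" onto "storms happen" is, in embryo, an angry storm god.
for idea in fluid_combine("social", "natural_history"):
    print(idea)
```

Note that nothing new was added to the knowledge itself; all the novelty comes from the cross-domain mapping, which is exactly what “cognitive fluidity” claims.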

And modern scientific investigation tends to back these ideas up:

The ability to switch between networks is a vital aspect of creativity. For instance, focusing on a creative puzzle with all of your attention might recruit the skills of the executive attention network. On the other hand, if the creative task involves producing a sonically pleasing guitar solo, focus might be switched from intense concentration to areas more involved in emotional content and auditory processing.

The neuroscience of creativity (Medical News Today)

It is this mixing of intelligences – this cognitive fluidity – that, incrementally at first, gave rise to language and symbolic thought over time. This leads to the “Great Leap Forward” seen in the archaeological record:

Of critical importance here is also a marked change in the nature of consciousness. Mithen has argued that reflexive consciousness evolved as a critical feature of social intelligence, as it enabled our ancestors to predict the behavior of other individuals. He then makes the point that there is now reason to expect early humans to have had an awareness about their own knowledge and thought processes concerning the nonsocial world. Via the mechanism of language, however, social intelligence began to be invaded by nonsocial information, and the nonsocial world becomes available for reflexive consciousness to explore…Consciousness then adopted the role of a comprehensive, integrating mechanism for knowledge that had previously been “trapped” in specialized intelligences.

The first step toward cognitive fluidity appears to have been integration between social and natural history intelligence in early modern humans around 100,000 years ago. The final step to full cognitive fluidity, the potential to entertain ideas that bring together elements from normally incongruous domains, occurred at different times in different populations between 60,000 and 30,000 years ago. This involved an integration of technical intelligence, and led to the cultural explosion we are now calling the appearance of the human mind.

…As soon as language started acting as a vehicle for delivering information into the mind, carrying with it snippets of nonsocial information…[it] now switched from a social to a general-purpose function, consciousness from a means to predict other individuals’ behavior to managing a mental database of information relating to all domains of behavior…Mithen’s most interesting point here is that some metaphors and analogies can be developed by drawing on knowledge within a single domain, but the most powerful ones are those that cross domain boundaries. By definition these kinds of metaphors can arise only within a cognitively fluid mind…

Alone in the World? by Wentzel Van Huyssteen, pp. 197-199

Yes, but were they conscious? There’s the rub. Is artwork proof of reflective self-consciousness? Are burials proof of such? Clearly tool use alone is not, as we’ve seen. And some of the most vibrant artwork has been done by schizophrenics.

Like Mithen, Jaynes also calls attention to the vital role of language and metaphor in cognitive fluidity and reflective self-consciousness. Even ‘the self’ itself is a metaphor!

…The most fascinating property of language is its capacity to make metaphors … metaphor is not a mere extra trick of language…it is the very constitutive ground of language. I am using metaphor here in its most general sense: the use of a term for one thing to describe another because of some kind of similarity between them or between their relations to other things.

There are thus always two terms in a metaphor, the thing to be described, which I shall call the metaphrand, and the thing or relation used to elucidate it, which I shall call the metaphier. A metaphor is always a known metaphier operating on a less known metaphrand.

It is by metaphor that language grows. The common reply to the question “what is it?” is, when the reply is difficult, or the experience unique, “well, it is like –.” In laboratory studies, both children and adults describing nonsense objects (or metaphrands) to others who cannot see them use extended metaphiers that with repetition become contracted into labels. This is the major way in which the vocabulary of language is formed. The grand and vigorous function of metaphor is the generation of new language as it is needed, as human culture becomes more and more complex.

It is not always obvious that metaphor has played this all-important function. But this is because the concrete metaphiers become hidden in phonemic change, leaving the words to exist on their own. Even such an unmetaphorical-sounding word as the verb ‘to be’ was generated from a metaphor. It comes from the Sanskrit bhu, “to grow, or to make grow,” while the English forms ‘am’ and ‘is’ have evolved from the same root as the Sanskrit asmi, “to breathe.”

It is something of a lovely surprise that the irregular conjugation of our most nondescript verb is thus a record of a time when man had no independent word for ‘existence’ and could only say that something ‘grows” or that it ‘breathes.’ Of course we are not conscious that the concept of being is thus generated from a metaphor about growing and breathing. Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use. pp. 48-51
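Jaynes’s two coinages effectively make a metaphor into a little two-field data structure. Here is a toy formalization, my own rendering rather than anything from the book:

```python
# Jaynes's anatomy of metaphor: a better-known metaphier is used to describe
# a lesser-known metaphrand.

from dataclasses import dataclass

@dataclass
class Metaphor:
    metaphrand: str   # the thing to be described (less known)
    metaphier: str    # the thing used to describe it (better known)

    def reading(self):
        return f"'{self.metaphrand}' understood via '{self.metaphier}'"

# Jaynes's own etymological examples, expressed in this form:
for m in (Metaphor("to be / existence", "to grow (Sanskrit bhu)"),
          Metaphor("am / is", "to breathe (Sanskrit asmi)")):
    print(m.reading())
```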

The ancient Greeks at the time of Homer lacked a word for blue; they referred to the Mediterranean Sea, for example, as “wine-colored” (οἶνοψ). The brilliant hues of the Mediterranean sunrise are famously described as “rosy-fingered” (ῥοδοδάκτυλος), and so forth. Wikipedia even has a list of them. A similar concept in Old Norse is called a kenning (e.g. blood = “battle sweat”).

Reading ancient texts is one of the rare opportunities we have to look upon a worldview entirely alien to us. The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine, and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black, or simply dark in ancient texts.

Also, things like subjective perspective and experience are lacking. Even body parts are regularly described as having their own minds. And voices are supposedly heard in the external world, command voices telling people what to do, while voices aren’t described as being heard within the head. There is no ancient word or description of a fully internalized sense of self.

It’s hard to know what to make of all this. There are various theories that attempt to explain it. But the main takeaway is that our common sense assumptions are false. There is something more to human nature and human society than we at present experience and understand. As a species, we are barely getting to know ourselves.

Benjamin David Steele (Facebook post)

Note that in our discussion above, even our descriptions of the mind rely upon metaphors (“Swiss army knife,” “cathedral”) and spatialization (“leaping forward”).

Finally, there was a German theorist of religion named Max Müller who saw the origin of what we call ‘gods’ in the way that humans naturally tend to conceive of things they do not understand metaphorically. His theories have been all but forgotten, but I think they fit nicely with the idea that in order to comprehend certain natural phenomena, ancient peoples resorted to assigning them the category ‘god,’ even when they knew, for instance, that the sun was not literally the chariot of Apollo, or that lightning bolts were not literally thrown by Zeus. Keep in mind that what we think of when we hear the word ‘god’ in our rationalist, materialistic, monotheistic-influenced culture is probably so different from what the ancient people using it at the time meant that we moderns can hardly conceive of what they had in mind. Here’s E. E. Evans-Pritchard describing Müller’s theories:

In [Müller’s] view, as I understand it, men have always had an intuition of the divine, the idea of the infinite–his word for God–deriving from sensory experience…Now, things which are intangible, like the sun and the sky, gave men the idea of the infinite and also furnished the material for deities…Müller did not wish to be understood as suggesting that religion began by men deifying natural objects, but rather that these gave him a feeling of the infinite and also served as symbols for it.

Müller was chiefly interested in the gods of India and of the classical world…His thesis was that the infinite, once the idea had arisen, could only be thought of in terms of metaphor and symbol, which could only be taken from what seemed majestic in the known world, such as the heavenly bodies, or rather their attributes. But these attributes then lost their original metaphorical sense and achieved autonomy by becoming personified as deities in their own right. The nomina became numina.

So religions, of this sort at any rate, might be described as a ‘disease of language’, a pithy but unfortunate expression which Müller later tried to explain away but never quite lived down. It follows, he held, that the only way we can discover the meaning of the religion of early man is by philological and etymological research, which restores to the names of the gods and the stories told about them their original sense.

Thus, Apollo loved Daphne; Daphne fled before him and was changed into a laurel tree. This legend makes no sense till we know that originally Apollo was a solar deity, and Daphne, the Greek name for the laurel, or rather the bay tree, was the name for the dawn. This tells us the original meaning of the myth: the sun chasing away the dawn.

E.E. Evans-Pritchard – Theories Of Primitive Religion, pp. 21-22

What We Talk About When We Talk About Consciousness

Previously: What If God Was One Of Us?

Last time we discussed the radical idea that “consciousness” arose relatively late in human history, roughly around the time of the Late Bronze Age Collapse in the Mediterranean.

Now, it’s important to understand that when Jaynes uses the term “consciousness,” he is talking about something very specific. It’s not simply being responsive to one’s exterior surroundings (sense perception), but being aware of them and filtering them through some kind of “inner life.” Jaynes contends that this sort of meta-awareness arrived relatively late in human history, and that we can pinpoint this change in comprehension through a careful reading of ancient literature, especially sacred literature and epic poetry.

Think of it this way: you see an apple; light waves hit your eyes, which send signals to your brain via the optic nerve. You “choose” to reach out and grasp it. A nerve signal goes out from the brain to your arm and hand. The apple is touched. Nerve fibers in the hand send signals back to the brain, describing the temperature, texture, firmness, and so forth. All of these signals are processed in various areas of the brain, which we can watch light up on an fMRI scan.
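That chain can be drawn as a deliberately dumb pipeline, a sketch of my own rather than anything from Jaynes, to make the point that the loop runs end to end without any introspective step:

```python
# Sense perception as pure signal flow: stimulus in, processing, action out.
# Nothing in this pipeline models or inspects itself.

def see(apple):                        # retina + optic nerve
    return {"color": apple["color"], "shape": apple["shape"]}

def decide(percept):                   # pattern -> action; no inner narrative
    return "reach" if percept["color"] == "red" else "ignore"

def act(command):                      # motor signal out, touch signals back in
    if command == "reach":
        return {"texture": "smooth", "firmness": "firm", "temperature": "cool"}
    return None

feedback = act(decide(see({"color": "red", "shape": "round"})))
print(feedback)   # the whole loop completed, and nothing "introspected"
```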

Jaynes isn’t talking about any of that stuff. That’s the process of sense perception. He’s talking about something else entirely. As Marcel Kuijsten of the Julian Jaynes Society describes:

[2:30-3:57] “In a nutshell, what Jaynes argues is that, as humans evolved language, along with language the brain was using language to then convey experience between the two hemispheres, which were operating in a, let’s say, a less integrated fashion than they are today.”

“This idea is a little shocking to people initially, because behavior was then directed by what we today call an auditory hallucination. But there’s a lot of evidence that he presents for this. The ancient literature is filled with all of these examples of people’s behavior being directed by what they interpreted as the gods, idols that they used to elicit these commands, and just quite a bit of evidence that he gets into explaining all this.”

“From that he realized that consciousness was not what people generally assume to be a biologically innate, evolved process, but it was something that was learned, and it was based on language. So after language got to a level of complexity, then we developed this ability to introspect. So he places the date for the development of consciousness much more recently than traditional ideas.”

“[10:18] Most of the critiques of the theory are based on misconceptions…[11:04] The most common mistake is that they are criticizing what Jaynes is saying based on their own view of consciousness rather than how Jaynes defines it. And consciousness is defined so differently by so many people that when you go to conferences on consciousness you see all these people giving lectures and they’re all really defining it in very, very different ways.”

Julian Jaynes and the Bicameral Mind Theory (This View of Life Magazine)

Jaynes himself acknowledges the inherent difficulty of using our own conscious mind to come to an intellectual reckoning of, well, itself!

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple is that to say; how difficult to appreciate! It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade a mentality when actually it does not. p. 23

Again, consciousness is not simply the sense perception of the world around you. It’s not required to do basic things like eat, sleep or have sex. It’s not even necessary for talking. Chimpanzees (and gorillas) have been taught to “talk” using sign language. Unless we attribute reflective self-consciousness to great apes, then clearly language—in terms of expressing simple desires and notions about the world using nouns and verbs—is not, strictly speaking, something only conscious beings can do, at least as Jaynes describes consciousness. All animals communicate in some fashion, whether they are self-conscious or not.

Also, it’s thought that language actually evolved in humans primarily for gossip, and that gossip evolved as a method of social bonding and coalition building, and not, please note, for ruminative thought or reflective self-awareness:

Human language didn’t evolve to name things but for gossip — our equivalent of primates grooming — which developed to maintain the bonds of trust in the ever growing social groups our ancestors formed to protect themselves against predators as they moved ‘out of Africa’ to survive…We continue to gossip today — approximately 65% of modern talking time is taken up by it, irrespective of age, gender or culture. The topics tend to be extreme events (both good and bad) that we struggle to make sense of alone. By engaging our peers we are better able to understand and act in the world around us.

The Problematic Storytelling Ape (Medium)

Nor is consciousness strictly necessary for large-scale social organization to develop. Earth’s animals include many eusocial and prosocial species. Ants, bees, and wasps are among the most successful animal species on the planet, engaging in agriculture, building large nests, raising each other’s young, waging organized war, and living in vast “cities.” Are the Hymenoptera conscious in the same way humans are? It’s highly doubtful. And yet they live in complex societies and many of their behaviors are similar to ours.

“I’ll take the example of the leaf cutter ant,” [economics professor Lisi] Krall explained … “They cut and harvest leaves, and then they feed the leaves to their fungal gardens, and they themselves then feed on the fungal gardens,” she said. The ants “develop into vast, vast colonies that have highly developed, profound divisions of labor.” Sound familiar?…”We engaged a kind of social evolution, that started with agriculture, that put us on a path of expansion and interconnectedness and ultimately, in humans, hierarchy, and all that kind of stuff,” she said.

Humans are more like ants than lone wolves (Treehugger)

Even writing existed for thousands of years as simply a mnemonic device for recording straightforward things like genealogies and inventories—”lists, lists and more lists,” as James C. Scott put it. There’s no indication that writing, strictly speaking, requires self-consciousness.

Agriculture, villages, towns, even cities and empires arose without the benefit of writing. The earliest forms of cuneiform writing consisted of clay tablets recording market transactions and tax records with [no] moral, political or legal lessons for future generations… These were mnemonic devices, no better and no worse than a string tied around the finger or rather more sophisticated sets of knots created by the Incans [sic]. The tablets circulated as bills of exchange, carrying a symbolic value as money rather than a historical value as something-to-be-preserved. Their symbolic function served, the tablets were simply thrown away in the trash. Daniel Lord Smail, On Deep History and the Brain p. 57

Animals have also constructed dwellings like hives, mounds, and nests, and made artwork: “Animal-made works of art have been created by apes, elephants, cetacea, reptiles, and bowerbirds, among other species.” (Wikipedia)

Chimpanzee wins $10,000 prize for abstract painting (The Guardian)

It used to be thought that reflexive self-consciousness was necessary for any sort of complex culture to exist, and that cumulative cultural evolution was something unique to humans. However, in 2014 researchers managed to induce cumulative cultural evolution in baboons. In 2017, it was found that homing pigeons can also gather, pass on and improve knowledge over generations. Then, whales and dolphins (cetaceans) were added to the mix. Then came migrating ungulates (hoofed mammals). Last year, researchers even detected evidence of it among fruit flies!
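The transmission-chain logic behind these experiments is simple enough to sketch. The code below is my own toy illustration, not the protocol of any of the studies linked above; it just shows the cultural “ratchet”: each generation inherits the best solution so far, varies it, and keeps whatever works better.

```python
# A minimal transmission chain: skill never falls, because later generations
# start where earlier ones left off -- cumulative cultural evolution in toy form.

import random

random.seed(42)

def imperfect_copy(skill):
    """A learner's copy of the inherited skill: sometimes worse, sometimes better."""
    return skill + random.uniform(-1.0, 1.0)

skill = 1.0                             # generation zero's crude solution
for generation in range(10):
    candidate = imperfect_copy(skill)   # social learning with variation
    skill = max(skill, candidate)       # keep improvements: the ratchet
    print(f"gen {generation}: best skill so far = {skill:.2f}")
```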

Primatologists have taken to regularly attributing the differences in chimpanzee behavior in various troops across Africa to “culture” rather than biological instinct. And tool use has been documented in a wide number of animals:

The suggestion that humanity is distinct by virtue of possessing a culture subject to Lamarckian evolution is more problematic than it may appear. The glitch lies in the fact that humans are no longer considered to be the only species to possess culture.

The idea that other animals have culture has been circulating for nearly three decades and has reached a point of media saturation that partially obscures the challenge created by the fact of animal culture. Although early studies focused on the apes and monkeys who make tools and wash sweet potatoes, culture does not end with primates.

Birds’ songs and migration routes are learned and transmitted culturally rather than genetically. Some groups of dolphins manipulate sponges to protect their noses while foraging and teach the practice to their offspring. The crows of New Caledonia clip twigs to create hooked tools that are used to retrieve insects from crevices. As with chimpanzees, the types of tools used by crows vary from one group to the next, suggesting that the very use of tools is transmitted through culture. Daniel Lord Smail, On Deep History and the Brain, p. 87

So, more and more, we are finding that self-reflective consciousness is not strictly necessary for many of the behaviors we used to think were uniquely human. Cumulative cultural evolution was there all along just waiting for us to find it! To a drastically lesser degree than us, of course, but it was there nevertheless. We were just too arrogant and self-absorbed to look properly.

So, then, what exactly do we mean when we talk about consciousness? Unless we consider baboons, chimps, orangutans, dolphins, whales, pigeons, crows, bighorn sheep, ants, termites and fruit flies as all conscious the way we are, we must look elsewhere, or else redefine what it is that we are truly searching for in the first place.

What it does not mean is what’s usually called “operant conditioning.” All animals are capable of that. Jaynes himself dismisses operant conditioning as an indicator of the type of consciousness that characterizes human beings. After describing standard experiments in which he “taught” everything from plants to microbes to reptiles to complete various tasks, he realized this had nothing whatsoever to do with the type of conscious behavior he was looking for:

It was, I fear, several years before I realized that this assumption makes no sense at all. When we introspect, it is not upon any bundle of learning processes, and particularly not the types of learning denoted by conditioning and T-mazes. Why then did so many worthies in the lists of science equate consciousness and learning? And why had I been so lame of mind as to follow them?…

It is this confusion that lingered unseen behind my first struggles with the problem, as well as the huge emphasis on animal learning in the first half of the twentieth century. But it is now absolutely clear that in evolution the origin of learning and the origin of consciousness are two utterly separate problems…

Is consciousness…this enormous influence of ideas, principles, beliefs over our lives and actions, really derivable from animal behavior? Alone of species, all alone! We try to understand ourselves and the world. We become rebels or patriots or martyrs on the basis of ideas. We build Chartres and computers, write poems and tensor equations, play chess and quartets, sail ships to other planets and listen to other galaxies – what have these to do with rats in mazes or the threat displays of baboons? The continuity hypothesis of Darwin for the evolution of the mind is a very suspicious totem of evolutionary mythology… pp. 7-8

The chasm is awesome. The emotional lives of men and other mammals are indeed marvelously similar, but to focus upon the similarity unduly is to forget that such a chasm exists at all. The intellectual life of man, his culture and history and religion and science, is different from anything else we know of in the universe. That is fact. It is as if all life evolved to a certain point, and then in ourselves turned at a right angle and simply exploded in a different direction. p.9

Jaynes controversially rejects the idea that consciousness is necessarily a part of human thinking and reasoning, as we commonly assume it must be. He cites the work of the Würzburg School of psychology in Germany and their discovery of so-called “imageless thoughts.”

The essential point here is that there are several stages of creative thought: first, a stage of preparation in which the problem is consciously worked over; then a period of incubation without any conscious concentration upon the problem; and then the illumination which is later justified by logic. The parallel between these important and complex problems and the simple problems of judging weights or the circle-triangle series is obvious. The period of preparation is essentially the setting up of a complex struction together with conscious attention to the materials on which the struction is to work. But then the actual process of reasoning, the dark leap into huge discovery, just as in the simple trivial judgement of weights, has no representation in consciousness. Indeed, it is sometimes almost as if the problem has to be forgotten to be solved. p. 44

Jaynes points out that not only is consciousness unnecessary for the performance of routine daily tasks, it can actually be counterproductive! Self-conscious reflection sets us to “watching ourselves” from an observer’s point of view, and our performance often degrades as a result. That is, we involve our “ego self” in whatever we happen to be doing at the moment. You can see this all the time with athletes: once they become conscious of wanting to win, they trip up and stop winning. The best athletic performances involve a certain lack of self-reflection (dare we say, a lack of conscious introspection), which lends them a sense of spontaneity. We might almost call it a trance, as in the Taoist tale of the dexterous butcher. There is a word for this “non-conscious” state in Chinese philosophy: wu-wei, or non-action. Edward Slingerland, an expert in ancient Chinese philosophy, has written a whole book about this concept called Trying Not to Try.

It’s clearly a different sort of consciousness that Jaynes is after. It is something uniquely human, but we don’t seem to be able to find it anywhere we look, except as a matter of degree over various other animals. Even art, culture, building, reasoning and communication fail to set us apart!

Nor does the “self” or “consciousness” have any fixed anatomical location, inside your noggin or anywhere else for that matter, as we seem to assume. Many ancient peoples located their conscious selves in the heart, not in the head. The ancient Greeks did so, seeing the brain as merely a cooling organ for the blood, like a car radiator. Out-of-body experiences likewise suggest that consciousness can locate itself anywhere, even outside the physical body itself!

Where does consciousness take place? Everyone, or almost everyone, immediately replies, in my head. This is because when we introspect, we seem to look inward on an inner space somewhere behind our eyes. But what on earth do we mean by ‘look’? We even close our eyes sometimes to introspect even more clearly. Upon what? Its spatial character seems unquestionable…

We not only locate this space of consciousness inside our own heads. We also assume it is there in others’. In talking with a friend, maintaining periodic eye-to-eye contact (that remnant of our primate past where eye-to-eye contact was concerned in establishing tribal hierarchies), we are always assuming a space between our companion’s eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.

And this is the very heartbeat of the matter, for we all know perfectly well that there is no such space in anyone’s head at all! There is nothing inside my head or yours except physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant. pp. 44-45

Let us not make a mistake. When I am conscious, I am always and definitely using certain parts of my brain inside my head. But so am I when riding a bicycle, and the bicycle riding does not go on inside my head. The cases are different of course, since bicycle riding has a definite geographical location, while consciousness does not. In reality, consciousness has no location whatever except as we imagine it has. p. 46

In the end Jaynes concludes with regard to consciousness:

We have been brought to the conclusion that consciousness is not what we generally think it is. It is not to be confused with reactivity. It is not involved in hosts of perceptual phenomena. It is not involved in the performance of skills and often hinders their execution. It need not be involved in speaking, writing, listening, or reading. It does not copy down experience, as most people think. Consciousness is not at all involved in signal learning, and need not be involved in the learning of skills or solutions, which can go on without any consciousness whatever. It is not necessary for making judgements or in simple thinking. It is not the seat of reason, and indeed some of the most difficult instances of creative reasoning go on without any attending consciousness. And it has no location except an imaginary one! The immediate question therefore is, does consciousness exist at all? pp. 46-47

Jaynes concludes that it does, but to understand what he means, we have to start thinking about it in a totally different way. And for that reason, we can’t find it simply by studying physical processes in the brain. We need to engage in a bit of existentialist philosophy:

The trick to understanding his model is first understanding what he means by “consciousness”. I don’t think he means what most of us mean when we talk about say the “hard problem” of consciousness. In modern considerations of consciousness, I think we largely refer to subjective experience – the “what it is like” to be aware of the world. Jaynes however dismisses this as mere sensory perception. He is more interested in what it is to have an internal “mindspace”, an “analog I” that experiences the world. Jaynes argues for the emergence of this sense of self and an inner mindspace from language. He sees the infinite capacity for metaphor inherent in human language as a means by which we can build similarly infinite concepts and ideas about our relationship with the external world.

That is, when we introspect upon our experience as selves in the world, we construct an inner self, an “I” that exists within our mind’s eye which is what it is that has these experiences, these relationships. This inner self is an analog for what our senses perceive and how we react and is what gives us a sense of the first person in how we view the world. I guess Jaynes is thinking here of some kind of conscious interiority, a feeling of being “in here” rather than “out there” (or perhaps nowhere at all).

Jaynes observes (as have many others) that this kind of awareness rests upon language. Human language has two distinctive features – the capacities for metaphorical representation and infinite recursion. With these basic tools, human beings can build infinitely complex models of self and experience. We can also use language to communicate – share – these models. In fact, over time it is this sharing that helps to construct commonly held models of experience that shape the course of cultural progress.

Julian Jaynes and the Analog “I” (Science Philosophy Chat Forums)
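The quoted passage leans on the idea that finite rules plus recursion yield unbounded structures. As a concrete illustration (my own sketch, not anything from Jaynes or the forum post; the toy vocabulary and the depth parameter are invented), here is how a single self-referencing rule generates ever-longer embedded noun phrases:

```python
import random

# A tiny illustrative grammar: a noun phrase (NP) may embed a relative
# clause, and that clause contains another NP. Because NP appears inside
# its own definition, a handful of finite rules can generate sentences
# of unbounded length -- the recursive property discussed above.
NOUNS = ["girl", "dog", "ball", "farmer"]
VERBS = ["saw", "chased", "found", "liked"]

def noun_phrase(depth: int) -> str:
    """Return an NP with `depth` levels of right-embedded relative clauses."""
    noun = random.choice(NOUNS)
    if depth == 0:
        return f"the {noun}"
    return f"the {noun} that {noun_phrase(depth - 1)} {random.choice(VERBS)}"

for d in range(4):
    print(f"depth {d}: this is {noun_phrase(d)}")
```

Nothing limits `depth` in principle; only memory and patience bound the output, which is the sense in which the capacity is “infinite.”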
The key to this is how the brain uses language to construct the self:

It is through language that we construct models of the self and through translation of our intuitions into words and ideas that we learn the limits of this language and the limits of our own particular perspective.

Through language we learn to differentiate between ourselves and others from a young age even if consciousness is not a concept that we ever learn explicitly or ever truly “know” our self.

It is in natural language — the spoken word, novels, poetry, vague metaphorical speech, descriptions of made-up things like love and self and consciousness — that we have our greatest tool to share our subjective experiences. A powerful tool to build a common roadmap to create better selves.

The self may be a fiction but in that case it is all the more vital that we embrace fiction, and by extension natural language, to communicate with each other at an ever deeper level.

The Problematic Storytelling Ape (Medium)

Thus, language is crucial in constructing the “self,” i.e. the concept of the individual “I” that we normally all carry around all day inside our heads: the homunculus with no material existence that we feel is “in there” somewhere. But, it’s important to note, the mere presence of language and writing does not by itself indicate that such introspective thinking exists. Rather, the self, the analog “I,” is a “concept” that utilizes our innate capacity for language, but is independent of it:

The analogue-I and analogue-me refer to mental self-relevant images that take a first-person vs. third-person perspective, respectively. Mental self-analogues are essential for goal setting, planning, and rehearsal of behavioral strategies, but they often fuel emotional and interpersonal problems when people react to their analogue selves as if they were real.

The Analogue-I and the Analogue-Me: The Avatars of the Self (Self and Identity)

Behavioral scientists have studied how this self interacts with the world, and they have found that there is not one unitary “self,” consistent over time, but multiple selves! These selves are often present at the same time, though instantiated in separate neural systems (as the fMRI evidence below suggests). This mind-blowing idea alone should make us question the notion that the self is simply a biological process inside our heads rather than a mental construct. In a recent study on willpower, the authors propose a conflict between multiple overlapping selves: “Simply put, you in the present is different than you in the future.” (Treehugger)

The second class of models posits multiple coexisting selves. This view holds that decision makers behave as if they were a composite of competing selves with different valuation systems and different priorities.

One “self” craves instant gratification (e.g., “I want to eat a cheeseburger! Yum!”), whereas another “self” is focused on maximizing long-term outcomes (e.g., “I want to eat a salad and be healthy!”). Self-control conflicts are the consequence of a present-oriented valuation system disagreeing with a future-oriented valuation system

…Evidence for multiple system models comes from functional MRI (fMRI) studies showing that self-controlled choices were associated with lateral prefrontal areas of the brain, whereas more impulsive choices were associated with the ventral striatum and ventromedial prefrontal cortex.

Beyond Willpower: Strategies for Reducing Failures of Self-Control (Sage Journals)
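To make the “competing selves” picture concrete, here is a toy sketch of two valuation systems disagreeing over the same choice. It uses the standard beta-delta (quasi-hyperbolic) discounting model from behavioral economics; the model choice and all payoffs and parameters below are my own illustration, not taken from the Sage paper:

```python
# Toy "multiple selves" model: two valuation systems score the same options.
# beta < 1 shrinks every non-immediate payoff (present bias); delta is
# ordinary exponential discounting per day. All numbers are invented.

def discounted_value(payoffs, beta=1.0, delta=0.95):
    """Value a stream of (delay_in_days, payoff) pairs."""
    total = 0.0
    for delay, payoff in payoffs:
        weight = 1.0 if delay == 0 else beta * (delta ** delay)
        total += weight * payoff
    return total

# Cheeseburger: pleasure now, health cost a month later.
burger = [(0, 5.0), (30, -30.0)]
# Salad: mild displeasure now, health benefit a month later.
salad = [(0, -1.0), (30, 20.0)]

for label, beta in [("present-oriented self", 0.3),
                    ("future-oriented self", 1.0)]:
    b = discounted_value(burger, beta=beta)
    s = discounted_value(salad, beta=beta)
    print(f"{label}: burger={b:.2f}, salad={s:.2f} "
          f"-> chooses the {'burger' if b > s else 'salad'}")
```

Run as-is, the present-oriented self picks the burger while the future-oriented self picks the salad: the same person, scoring the same options, reaching opposite conclusions.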

Given all this, Jaynes finally lists what he believes are the core characteristics of the kind of introspective human consciousness he’s talking about:

1. Spatialization – We tend to describe reality in terms of spatial visualization. “If I ask you to think of the last hundred years, you may have a tendency to excerpt the matter in such a way that the succession of years is spread out, probably from left to right. But of course there is no left or right in time. There is only before and after, and these do not have any spatial properties whatever – except by analog. You cannot, absolutely cannot think of time except by spatializing it. Consciousness is always a spatialization in which the diachronic is turned into the synchronic, in which what has happened in time is excerpted and seen in side-by-sideness.” p. 60

2. Excerption (screening, or filtering) – Our perception of reality is necessarily limited. “In consciousness, we are never ‘seeing’ anything in its entirety…we excerpt from the collection of possible attentions to a thing which comprises our knowledge of it. And this is all that is possible to do since consciousness is a metaphor of our actual behavior.”

3. The Analog ‘I’ – “…the metaphor we have of ourselves which can move about vicarially in our imagination doing things we are not actually doing…In the example of…spatialization, it was not your physical behavioral self that was trying to ‘see’ where my theory ‘fits’ into the array of alternative theories. It was your analog ‘I’.” pp. 62-63

4. The Metaphor ‘Me’ – “We can both look out from within the imagined self at the imagined vistas, or we can step back a bit and see ourselves perhaps kneeling down for a drink of water at a particular brook.”

5. Narratization – We construct narratives to understand the world: “In our consciousness we are always seeing our vicarial selves as the main figures in the stories of our lives. In the above illustration, the narratization is obvious, namely, walking along a wooded path. But it is not so obvious that we are constantly doing this whenever we are being conscious, and this I call narratization.”

6. Conciliation – We comprehend new things by fitting them within established patterns. “…a slightly ambiguous perceived object is made to conform to some previously learned schema, an automatic process sometimes called assimilation. We assimilate a new stimulus into our conception, or schema about it, even though it is slightly different…assimilation consciousized is conciliation. In conciliation we are making excerpts or narratizations compatible with each other, just as in external perception the new stimulus and the internal conception are made to agree…”

To this I would also add that the human mind seems to have an inherent instinct for meaning or purpose, and that it tends to be quite good at self-deception. We’ll also explore the human mind’s capacity for recursion later on.

To get some clues about how all this developed, we’ll take a look at some theories of how the modern human brain evolved from earlier hominins next time.

BONUS: Robert Sapolsky: Are Humans Just Another Primate?