The Voices in Your Head

What If God Was One Of Us?

What We Talk About When We Talk About Consciousness

The Cathedral of the Mind

One of Oliver Sacks’ last popular books, published in 2012, was about hallucinations, titled, appropriately, Hallucinations. In it, he takes a look at numerous types of hallucinatory phenomena—hallucinations among the blind (Charles Bonnet Syndrome); sensory deprivation; delirium; grieving; Post-traumatic Stress Disorder; epilepsy; migraines; hypnagogia; Parkinson’s Disease; psychedelic use; religious ecstasy; and so on.

The book presents a number of interesting facts about auditory hallucinations.
One is that although hearing voices can indeed be indicative of schizophrenia, in most cases auditory hallucinations are experienced by perfectly normal people with no other signs of mental illness.

Sacks begins his chapter on auditory hallucinations by describing a 1973 experiment in which eight “fake” patients went to mental hospitals complaining of hearing voices, but displaying no other signs of mental illness or distress. In each case, they were diagnosed as schizophrenic (one was considered manic-depressive), committed to a facility for two months, and given anti-psychotic medication (which they obviously did not take). While committed, they even openly took notes on their experiences, yet none of the doctors or staff ever wised up to the ruse. The other patients, however, were much more perceptive. They could clearly see that the fake patients were not at all mentally ill, and even asked them, “What are you doing here?” Sacks concludes:

This experiment, designed by David Rosenhan, a Stanford psychologist (and himself a pseudopatient), emphasized, among other things, that the single symptom of “hearing voices” could suffice for an immediate, categorical diagnosis of schizophrenia even in the absence of any other symptoms or abnormalities of behavior. Psychiatry, and society in general, had been subverted by the almost axiomatic belief that “hearing voices” spelled madness and never occurred except in the context of severe mental disturbance. p. 54

While people often mischaracterize Jaynes’ theory as “everyone in the past was schizophrenic,” it turns out that even today most voices are heard by perfectly normal, otherwise rational, sane, high-functioning people. This has been recognized for over a century in medical literature:

“Hallucinations in the sane” were well recognized in the nineteenth century, and with the rise of neurology, people sought to understand more clearly what caused them. In England in the 1880s, the Society for Psychical Research was founded to collect and investigate reports of apparitions or hallucinations, especially those of the bereaved, and many eminent scientists—physicians as well as physiologists and psychologists—joined the society (William James was active in the American branch)…These early researchers found that hallucinations were not uncommon among the general population…Their 1894 “International Census of Waking Hallucinations in the Sane” examined the occurrence and nature of hallucinations experienced by normal people in normal circumstances (they took care to exclude anyone with obvious medical or psychiatric problems). Seventeen thousand people were sent a single question:

“Have you ever, when believing yourself to be completely awake, had a vivid impression of seeing or being touched by a living being or inanimate object, or of hearing a voice, which impression, as far as you could discover, was not due to an external physical cause?”

More than 10 percent responded in the affirmative, and of those, more than a third heard voices. As John Watkins noted in his book Hearing Voices, hallucinated voices “having some kind of religious or supernatural content represented a small but significant minority of these reports.” Most of the hallucinations, however, were of a more quotidian character. pp. 56-57

While the voices heard by schizophrenics are often threatening and controlling, the voices heard by most people do not appear to have any effect on normal functioning at all.

The voices that are sometimes heard by people with schizophrenia tend to be accusing, threatening, jeering, or persecuting. By contrast, the voices hallucinated by the “normal” are often quite unremarkable, as Daniel Smith brings out in his book Muses, Madmen, and Prophets: Hearing Voices and the Borders of Sanity. Smith’s own father and grandfather heard such voices, and they had different reactions. His father started hearing voices at the age of thirteen. Smith writes:

“These voices weren’t elaborate, and they weren’t disturbing in content. They issued simple commands. They instructed him, for instance, to move a glass from one side of the table to another or to use a particular subway turnstile. Yet in listening to them and obeying them his interior life became, by all accounts, unendurable.”

Smith’s grandfather, by contrast, was nonchalant, even playful, in regard to his hallucinatory voices. He described how he tried to use them in betting at the racetrack. (“It didn’t work, my mind was clouded with voices telling me that this horse could win or maybe this one is ready to win.”) It was much more successful when he played cards with his friends. Neither the grandfather nor the father had strong supernatural inclinations; nor did they have any significant mental illness. They just heard unremarkable voices concerned with everyday things–as do millions of others. pp. 58-59

To me, this sounds an awful lot like Jaynes’s description of the reality of bicameral man, doesn’t it? The voices command, and the people obey the commands. Yet they are still outwardly normal, functioning individuals. You might never know whether someone is obeying voices in their head unless they explicitly told you:

This is what Jaynes calls the “bicameral mind”: one part of the brain (the “god” part) evaluates the situation and issues commands to the other part (the “man” part) in the form of auditory and, occasionally, visual hallucinations (Jaynes hypothesizes that the god part must have been located in the right hemisphere, and the man part in the left hemisphere of the brain). The specific shapes and “identities” of these hallucinations depend on the culture, on what Jaynes calls the “collective cognitive imperative”: we see what we are taught to see, what our learned worldview tells us must be there.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

In most of the cases described in Hallucinations, people didn’t attribute their auditory or visual hallucinations to any kind of supernatural entity or numinous experience. A few did refer to them as “guardian angels”. But what if they had grown up in a culture where this sort of thing was considered normal, if not commonplace, as was the case for most of ancient history?

Hearing voices occurs in every culture and has often been accorded great importance–the gods of Greek myth often spoke to mortals, and the gods of the great monotheistic traditions, too. Voices have been significant in this regard, perhaps more so than visions, for voices, language, can convey an explicit message or command as images alone cannot.

Until the eighteenth century, voices—like visions—were ascribed to supernatural agencies: gods or demons, angels or djinns. No doubt there was sometimes an overlap between such voices and those of psychosis or hysteria, but for the most part, voices were not regarded as pathological; if they stayed inconspicuous and private, they were simply accepted as part of human nature, part of the way it was with some people. p. 60

In the book The Master and His Emissary, Iain McGilchrist dismisses Jaynes’s theory by claiming that schizophrenia is a disease of recent vintage, one that only emerged sometime around the nineteenth century. Yet, as the Jaynes foundation website points out (2.7), this is merely when the diagnosis of schizophrenia was established. Before that time, it would not have been considered pathological or a disease at all. We looked at how Akhnaten’s radical monotheism was possibly inspired by God “speaking” directly to him, issuing commands to build temples, and so forth. Certainly, he thought it was, at any rate. And he’s hardly alone. We’ve already looked at notable historical personages like Socrates, Muhammad, Joan of Arc, and Margery Kempe, and there are countless other examples. Schizophrenia is no more “new” than is PTSD, which was barely recognized until after World War One, when it was called “shell shock.”

“My Eyes in the Time of Apparition” by August Natterer. 1913

Another thing Sacks points out is that command hallucinations tend to occur in stressful situations or times of extreme duress, or when one has some sort of momentous or climactic decision to make, just as Jaynes posited. In times of stress, perfectly ordinary, sane people often hear an “outside” voice coming from somewhere guiding their actions. This is, in fact, quite common. In “normal” conditions we use instinct or reflex to guide our actions. But in emergencies, we hear a voice that seems to come from somewhere outside our own consciousness:

If, as Jaynes proposes, we take the earliest texts of our civilisation as psychologically valid evidence, we begin to see a completely different mentality. In novel and stressful situations, when the power of habit doesn’t determine our actions, we rely on conscious thinking to decide what to do, but, for example, the heroes of the Iliad used to receive their instructions from gods — which would appear in times of uncertainty and stress.

Julian Jaynes and William Shakespeare on the origin of consciousness in the breakdown of bicameral mind (Sonnets in Colour)

For example, Dr. Sacks is perfectly aware that his “inner monologue” is internally generated. Yet in a stressful situation, the voice became externalized—something that seemed to speak to him from some outside source:

Talking to oneself is basic to human beings, for we are a linguistic species; the great Russian psychologist Lev Vygotsky thought that inner speech was a prerequisite of all voluntary activity. I talk to myself, as many of us do, for much of the day–admonishing myself (“You fool! Where did you leave your glasses?”), encouraging myself (“You can do it!”), complaining (“Why is that car in my lane?”) and, more rarely, congratulating myself (“It’s done!”). Those voices are not externalized; I would never mistake them for the voice of God, or anyone else.

But when I was in danger once, trying to descend a mountain with a badly injured leg, I heard an inner voice that was wholly unlike my normal babble of inner speech. I had a great struggle crossing a stream with a buckled and dislocating knee. The effort left me stunned, motionless for a couple of minutes, and then a delirious languor came over me, and I thought to myself, Why not rest here? A nap maybe? This was immediately countered by a strong, clear, commanding voice, which said, “You can’t rest here—you can’t rest anywhere. You’ve got to go on. Find a pace you can keep up and go on steadily.” This good voice, the Life voice, braced and resolved me. I stopped trembling and did not falter again. pp. 60-61

Sacks gives some other anecdotal examples of people under extreme duress:

Joe Simpson, climbing in the Andes, also had a catastrophic accident, falling off an ice ledge and ending up in a deep crevasse with a broken leg. He struggled to survive, as he recounted in Touching the Void–and a voice was crucial in encouraging and directing him:

“There was silence, and snow, and a clear sky empty of life, and me, sitting there, taking it all in, accepting what I must try to achieve. There were no dark forces acting against me. A voice in my head told me that this was true, cutting through the jumble in my mind with its coldly rational sound.”

“It was as if there were two minds within me arguing the toss. The *voice* was clean and sharp and commanding. It was always right, and I listened to it when it spoke and acted on its decisions. The other mind rambled out a disconnected series of images, and memories and hopes, which I attended to in a daydream state as I set about obeying the orders of the *voice*. I had to get to the glacier….The *voice* told me exactly how to go about it, and I obeyed while my other mind jumped abstractly from one idea to another…The *voice*, and the watch, urged me into motion whenever the heat from the glacier halted me in a drowsy exhausted daze. It was three o’clock—only three and a half hours of daylight left. I kept moving but soon realized that I was making ponderously slow headway. It didn’t seem to concern me that I was moving like a snail. So long as I obeyed the *voice*, then I would be all right.”

Such voices may occur with anyone in situations of extreme threat or danger. Freud heard voices on two such occasions, as he mentioned in his book On Aphasia:

“I remember having twice been in danger of my life, and each time the awareness of the danger occurred to me quite suddenly. On both occasions I felt “this was the end,” and while otherwise my inner language proceeded with only indistinct sound images and slight lip movements, in these situations of danger I heard the words as if somebody was shouting them into my ear, and at the same time I saw them as if they were printed on a piece of paper floating in the air.”

The fact that the gods tend to come to mortals in the Iliad during times of stress has been noted by Judith Weissman, author of “Of two minds: Poets who hear voices”:

Judith Weissman, a professor of English at Syracuse University, notes that in the Iliad the gods speak directly to the characters over 30 times, often when the characters are under stress. Many of the communications are short, brief exhortations. The most common godly command, issued when the men are fearful in battle, is to “fight as your father did.” At one point in the Iliad, the god Apollo picks up Hektor, who has fallen in battle, and says, “So come now, and urge on your cavalry in their numbers / to drive on their horses against the hollow ships” (15.258-59)…

Personality Before the Axial Age (Psychology Today)

Hallucinations are also quite common in soldiers suffering from PTSD. If modern soldiers experience PTSD, how much more traumatic would ancient battles have been, like those described so vividly in the Iliad? I can’t even imagine standing face-to-face with a foe, close enough to feel his hot breath, and having to shove a long, sharp metal object directly into his flesh without hesitation; blood gushing everywhere and viscera sliding out of his belly onto the dirt. And yet this was the reality of ancient warfare in the Bronze and Iron ages. Not to mention the various plagues, dislocations, natural disasters, invasions, and other assorted traumatic events.

People with PTSD are also prone to recurrent dreams or nightmares, often incorporating literal or somewhat disguised repetitions of the traumatic experiences. Paul Chodoff, a psychiatrist writing in 1963 about the effects of trauma in concentration camp survivors, saw such dreams as a hallmark of the syndrome and noted that in a surprising number of cases, they were still occurring a decade and a half after the war. The same is true of flashbacks. p. 239

Veterans with PTSD may hallucinate the voices of dying comrades, enemy soldiers, or civilians. Holmes and Tinnin, in one study, found that the hearing of intrusive voices, explicitly or implicitly accusing, affected more than 65 percent of veterans with combat PTSD. p. 237 note 4

The other very common occurrence where otherwise “sane” people will often hallucinate sounds or images is during grief and bereavement. Sometimes this is just hearing the voice of the departed person speaking to them or calling them. Sometimes they may actually see the person. And sometimes they may even carry on extended conversations with their deceased family members!

Bereavement hallucinations, deeply tied to emotional needs and feelings, tend to be unforgettable, as Elinor S., a sculptor and printmaker, wrote to me:

“When I was fourteen years old, my parents, brother and I were spending the summer at my grandparents’ house as we had done for many previous years. My grandfather had died the winter before.”

“We were in the kitchen, my grandmother was at the sink, my mother was helping and I was still finishing dinner at the kitchen table, facing the back porch door. My grandfather walked in and I was so happy to see him that I got up to meet him. I said ‘Grampa,’ and as I moved towards him, he suddenly wasn’t there. My grandmother was visibly upset, and I thought she might have been angry with me because of her expression. I said to my mother that I had really seen him clearly, and she said that I had seen him because I wanted to. I hadn’t been consciously thinking of him and still do not understand how I could have seen him so clearly. I am now seventy-six years of age and still remember the incident and have never experienced anything similar.”

Elizabeth J. wrote to me about a grief hallucination experienced by her young son:

“My husband died thirty years ago after a long illness. My son was nine years old at the time; he and his dad ran together on a regular basis. A few months after my husband’s death, my son came to me and said that he sometimes saw his father running past our home in his yellow running shorts (his usual running attire). At the time, we were in family grief counselling, and when I described my son’s experience, the counsellor did attribute the hallucinations to a neurologic response to the grief. This was comforting to us, and I still have the yellow running shorts.” pp. 233-234

It turns out that this kind of thing is extremely common:

A general practitioner in Wales, W.D. Rees, interviewed nearly three hundred recently bereft people and found that almost half of them had illusions or full-fledged hallucinations of a dead spouse. These could be visual, auditory, or both—some of the people interviewed enjoyed conversations with their hallucinated spouses. The likelihood of such hallucinations increased with the length of the marriage, and they might persist for months or even years. Rees considered these hallucinations to be normal and even helpful in the mourning process. p. 234

A group of Italian psychological researchers published a paper in 2014 entitled “Post-bereavement hallucinatory experiences: A critical overview of population and clinical studies.” After an extensive review of the peer-reviewed literature, they found that anywhere from 30 to 60 percent of grieving people experienced what they called “post-bereavement hallucinatory experiences” (PBHEs). Is it any wonder that veneration of the dead was so common across cultures, from the Old World to Africa to Asia to the Americas to Polynesia? It was almost universally assumed across ancient cultures that the dead still existed in some way. Some scholars, such as Herbert Spencer, posited that ancestor worship was the origin of all religious rites and practices.

What is the fundamental cause of all these auditory hallucinations? As Sacks, himself a neurologist, freely admits, the source of these phenomena is at present unknown and understudied. Sacks references Jaynes’s “Origin of Consciousness…” in his speculation on possible explanations:

Auditory hallucinations may be associated with abnormal activation of the primary auditory cortex; this is a subject which needs much more investigation not only in those with psychosis but in the population at large–the vast majority of studies so far have examined only auditory hallucinations in psychiatric patients.

Some researchers have proposed that auditory hallucinations result from a failure to recognize internally generated speech as one’s own (or perhaps it stems from a cross-activation with the auditory areas so that what most of us experience as our own thoughts becomes “voiced”).

Perhaps there is some sort of psychological barrier or inhibition that normally prevents most of us from “hearing” such inner voices as external. Perhaps that barrier is somehow breached or underdeveloped in those who do hear constant voices. Perhaps, however, one should invert the question–and ask why most of us do not hear voices.

In his influential 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes speculated that, not so long ago, all humans heard voices–generated internally from the right hemisphere of the brain, but perceived (by the left hemisphere) as if external, and taken as direct communications from the gods. Sometime around 1000 B.C., Jaynes proposed, with the rise of modern consciousness, the voices became internalized and recognized as our own…Jaynes thought that there might be a reversion to “bicamerality” in schizophrenia and some other conditions. Some psychiatrists (such as Nasrallah, 1985) favor this idea or, at the least, the idea that the hallucinatory voices in schizophrenia emanate from the right side of the brain but are not recognized as one’s own, and are thus perceived as alien…It is clear that “hearing voices” and “auditory hallucinations” are terms that cover a variety of different phenomena. pp. 63-64

Recently, neuroscientists have hypothesized the existence of something called an “efference copy,” which the brain makes of certain types of self-generated signals. The presence of the efference copy informs the brain that certain actions have originated from itself, and that the resulting sensory inputs are self-generated. For example, the efference copy of your hand movements is what prevents you from tickling yourself. The lack of this efference copy has been postulated as the reason why schizophrenics cannot recognize the voices in their heads as their own. A temporary suppression of the efference copy may be why so many otherwise “sane” people hear voices as something coming from outside their own mind.

Efference copy is a neurological phenomenon first proposed in the early 19th century in which efferent signals from the motor cortex are copied as they exit the brain and are rerouted to other areas in the sensory cortices. While originally proposed to explain the perception of stability in visual information despite constant eye movement, efference copy is now seen as essential in explaining a variety of experiences, from differentiating between exafferent and reafferent stimuli (stimulation from the environment or resulting from one’s own movements, respectively) to attenuating or filtering sensation resulting from willed movement to cognitive deficits in schizophrenic patients to one’s inability to tickle one’s self.

Efference Copy – Did I Do That? Cody Buntain, University of Maryland (PDF)

I talk to myself all the time. The words I’m typing in this blog post are coming from some kind of “inner self.” But I feel like that inner voice and “me” are exactly the same, as I’m guessing you do too, and so do most of us “normal” people. But is that something inherent in the brain’s bioarchitecture, or is that something we are taught through decades of schooling and absorbing our cultural context? Might writing and education play an important role in the “breakdown” of bicameralism? We’ll take a look at that next time.

The version of bicameralism that seems most plausible to me is the one where the change is a memetic rather than evolutionary-genetic event. If it were genetic, there would still be too many populations that don’t have it, but whose members when plucked out of the wilderness and sent to university seem to think and feel and perceive the world the way the rest of us do.

But integrated, introspective consciousness could be somewhere between language and arithmetic on the list of things that the H. sapiens brain has always been capable of but won’t actually do if you just have someone raised by wolves or whatnot. Language, people figure out as soon as they start living in tribes. Arithmetic comes rather later than that. If Jaynes is right, unicameral consciousness is something people figure out when they have to navigate an environment as complex as a bronze-age city, and once they have the knack they teach their kids without even thinking about it. Or other peoples’ kids, if they are e.g. missionaries.

At which point brains whose wiring is better suited to the new paradigm will have an evolutionary advantage, and there will be a genetic shift, but as a slow lagging indicator rather than a cause.

John Schilling – Comment (Slate Star Codex)

[I’m just going to drop this here—much of the cause of depression stems from negative self-talk (“It’s hopeless!”; “Things will never get better”; “I’m worthless”; etc.). In such cases, this “inner voice,” rather than being encouraging, mercilessly hectors the depressed individual. As psychologists often point out to their patients, we would never talk to anyone else as callously as we talk to ourselves. Why is that? And it seems interesting that, as far as I know, there are no references to depression in bicameral civilizations. Ancient literature is remarkably free of “despair suicides” (as opposed to suicides for other reasons, such as defeat in battle or humiliation).]

The Cathedral of the Mind

What if God was one of us?

What we talk about when we talk about consciousness

Nothing here but spilled chocolate milk.

There are what I call “hard” and “soft” interpretations of Jaynes’s thesis. The “hard” interpretation is exactly what is posited in the book: humans did not have reflexive self-awareness in the way we describe it today until roughly the Bronze Age.

The “soft” interpretation is that a shift in consciousness occurred, quite possibly in the way that Jaynes described it, but that it occurred around 40-70,000 years ago during the Ice Age, long before writing or complex civilizations, when our ancestors were still hunter-gatherers. Another “soft” interpretation is that our ancestors definitely thought differently than we do, but they were still conscious agents nonetheless, and that the gods and spirits they referred to so often and who seemed to control their lives were merely figments of their imagination.

The Great Leap Forward

The idea that humans experienced some sort of significant cognitive transformation sometime after becoming anatomically modern is no longer controversial. This is the standard view in archaeology. Scientists call this the transition from anatomically modern humans to behaviorally modern humans. This article has a good summary:

… During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appear in Europe together with complex stone tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and thus, archaeologists cite evidence for these behaviors as proof of human behavioral modernity.

Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia and in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier “archaic” African and Asian humans with their “behaviorally-modern” Upper Paleolithic counterparts, explaining the differences between them in terms of a single “Human Revolution” that fundamentally changed human biology and behavior.

Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different than that of more-recent “modern” humans.

Earliest humans not so different from us, research suggests (Science Daily)

What no one knows, however, is what caused it, how it took place, or exactly when and where it took place. But the idea that there could be some kind of drastic cognitive shift without significant physical changes is no longer fringe. As Jared Diamond wrote:

Obviously, some momentous change took place in our ancestors’ capabilities between about 100,000 and 50,000 years ago. That Great Leap Forward poses two major unresolved questions, regarding its triggering cause and its geographic location. As for its cause, I argued in my book The Third Chimpanzee for the perfection of the voice box and hence the anatomical basis for modern language, on which the exercise of human creativity is so dependent. Others have suggested instead that a change in brain organization around that time, without a change in brain size, made modern language possible. Jared Diamond; Guns, Germs and Steel, p.40

Archaeologists tend to look at all the things in the archaeological record that indicate that Paleolithic humans were like us (e.g. complex tools, art, body ornamentation, trade, burial of the dead, food storage and preservation), but for some reason they downplay or dismiss all the things that show that, in many ways, they were quite different from us. That is, in some respects, they were not nearly as “behaviorally modern” as we tend to assume.

For example, here are some other things they did during this time period: carved ivory and wooden idols; made sacrifices to their gods (including mass child sacrifice); practiced cannibalism; used sleep temples; built strange statues with eyes and no mouths (“eye idols”); practiced astrology; and regularly poked holes in their skulls for reasons we are still unsure of. In other words, for all the evidence that they thought like us, there is other evidence suggesting that their thinking was substantially different from ours in many ways! But we tend to emphasize the former and ignore the latter. This leads to Jaynes’s idea that there may have been more than just one Great Leap Forward, and that human consciousness has changed significantly since the establishment of agricultural civilizations.

Let’s take a quick detour into how scientists think the human brain may have developed to gain some insight into whether there may be evidence for bicameralism.

A short digression into brain architecture

The idea that the brain is composed of previous adaptations which have been extended is fairly well accepted. The Triune Brain hypothesis holds that we have a “lizard brain” which controls base functions like breathing and is highly aggressive and territorial. On top of that is a rodent (paleomammalian) brain that allows us to perform more complex social functions and solve basic problems. Then we have the primate (neomammalian) brain, including the neocortex, which allows for larger groups and advanced reasoning. This is seen as basically correct in broad strokes, although a vast oversimplification of the complexities of how the primate brain developed.

From Primitive Parts, A Highly Evolved Human Brain (NPR)

The brain of an organism cannot just “go down” for maintenance while it upgrades. It has to keep the organism alive and reproducing. So new modules have to be added ad hoc, on the fly, to what’s already there. This leads to a brain of aggregations, in which new features have to mix with older ones, much the way legacy code persists inside newer software. This, as you can imagine, can lead to “buggy code.”

Archaeologist Steven Mithen wrote a book about the prehistory of the mind—what we might call “cognitive archaeology.” He notes that certain processes, like learning language, seem to come automatically to the brain, while others, like multiplying two large numbers together in one’s head, do not. This means that the brain is not like, say, a “general purpose” I/O microcomputer, as it’s often described. He writes: “The mind doesn’t just accumulate information and regurgitate it. And nor is it indiscriminate in the knowledge it soaks up. My children—like all children—have soaked up thousands of words effortlessly, but their suction seems to lose its power when it comes to multiplication tables.”

This indicates that the human mind has some inherent, or built-in, propensities, alongside the general intelligence all animals have. That means they may be of evolutionary origin. Spoken language appears to be one of these. While we send our kids to school for years to try and pound algebra, trigonometry and the correct spelling of words into them, children soak up language from their environment shortly after birth with hardly any effort at all.

Noam Chomsky invoked something he called the "poverty of the stimulus" to make this point. He meant that given how fast and accurately children learn language by osmosis, there is no way it comes "just" from environmental inputs, the way a computer is fed data. Children must, in some sense, be pre-programmed to learn language, and thus language's fundamental construction must be related to how the brain functions, something he called a "universal grammar." Over time, more of these apparently "inherent" behaviors have been identified in humans:

It became increasingly unpopular to assume that a basic understanding of the world can be built entirely from experience. This was in part instigated by theorist Noam Chomsky, who argued that something as complex as the rules of grammar cannot be picked up from exposure to speech, but is supplied by an innate “language faculty.”

Others followed suit and defined further “core areas” in which knowledge allegedly cannot be pieced together from experience but must be innate. One such area is our knowledge of others’ minds. Some even argue that a basic knowledge of others’ minds is not only possessed by human infants, but must be evolutionarily old and hence shared by our nearest living relatives, the great apes.

Children understand far more about other minds than long believed (The Conversation)

This means that, rather than being like a computer or a sponge, argues Mithen, the mind is more akin to a "Swiss army knife," with different modules for different uses, but all fundamentally part of the same basic "object." One study, for example, has found that the ability to recognize faces is innate. This explains the human penchant for pareidolia.

You (probably) see a face in this chair, but do you ever see a chair in someone’s face?

Using the Swiss Army knife metaphor, Mithen argues that these various specialized cognitive modules overlap with what he calls “general intelligence.” This overlap between specialized intelligences and the general intelligence leads to a lot of unique features of human cognition such as creativity, socialization, and, perhaps, constructing things like ‘gods’ and the ‘self.’ Here’s a good summary:

Mithen…[argues]…that the mind should … be seen as a series of specialized "cognitive domains" or "intelligences," each of which is dedicated to some specific type of behavior, such as specialized modules for acquiring language, or tool-using abilities, or engaging in social interaction…his argument will be that the modern human mind has an architecture built up by millions of years of evolution, which finally yielded a mind that creates, thinks, and imagines.

Mithen…highlights recent efforts in psychology to move beyond thinking of the mind as running a general-purpose program, or as a sponge indiscriminately soaking up whatever information is around. A new analogy for the human mind has taken its place: the Swiss army knife, a tool with specialized devices, designed for coping with very special types of problems.

This is found especially in Howard Gardner's important book Frames of Mind: The Theory of Multiple Intelligences. In this well-known work we are presented with a Swiss-army-knife architectural model for the mind, with each "blade," or cognitive domain, described as a specialized intelligence. Gardner initially identified seven intelligences: linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, and two forms of personal intelligence (one for looking at one's own mind, one for looking outward toward others).

Alone in the World? by Wentzel Van Huyssteen, pp. 194-195

From this, Mithen proposes a new metaphor: that of a cathedral, with a central nave standing in for generalized intelligence, and numerous walled-off enclaves (side chapels) for the various specialized cognitive functions. In a nutshell, Mithen argues that the "walls" between these areas began to break down over time, and the services in the side chapels increasingly blended together with the "main service" taking place in the nave. This mixture gives rise to the various symbolic and metaphorical aspects of human consciousness, what he terms "cognitive fluidity."

Mithen fills out the three stages in the historical development of the human mind as follows:

In Phase One human minds were dominated by a central "nave" of generalized intelligence.

Phase Two adds multiple “chapels” of specialized intelligences, including the cognitive domains of language, social intelligence, technical intelligence, and natural history intelligence.

Phase Three brings us to the modern mind in which the “chapels” or cognitive domains have been connected, resulting in what Mithen calls cognitive fluidity. This creative combination of the various cognitive domains of the mind would ultimately have profound consequences for the nature of the human mind. With this cognitive fluidity, the mind acquired not only the ability for, but also a positive passion for, metaphor and analogy. And with thoughts originating in different domains engaging one another, the result is an almost limitless capacity for imagination.

It is exactly this amazing ability that would make our species so different from early humans who shared the same basic mind – a Swiss army knife of multiple intelligences, but with very little interaction between them.

Mithen's useful model here, again, is a cathedral with several isolated chapels, within which unique services of thought were undertaken, each barely audible elsewhere in the cathedral. In Mithen's words: "Early humans seem to have been so much like us in some respects, because they had these specialized cognitive domains; but they seem so different because they lacked the vital ingredient of the modern mind: cognitive fluidity."

[Behavioral modernity] is when "doors and windows were inserted between chapel walls", when thoughts and information began flowing freely among the diverse cognitive domains or intelligences. Specialized intelligences no longer had to work in isolation, but a "mapping across knowledge systems" now became possible, and from this "transformation of conceptual spaces" creativity could now arise as never before.

Mithen thus appropriates some of the work of cognitive psychologists, to make the related point that in both development and evolution the human mind undergoes (or has undergone) a transformation from being constituted by a series of relatively independent cognitive domains to a situation in which ideas, ways of thinking, and knowledge now flow freely between such domains. This forms the basis for the highly plausible hypothesis that during this amazing emergent period of transition, the human brain was finally hardwired for cognitive fluidity, yielding imagination and creativity.

Alone in the World? by Wentzel Van Huyssteen pp. 195-197
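As a playful caricature only (the domains and "facts" below are invented for the sketch, not drawn from Mithen), the structural difference between Phase Two's walled chapels and Phase Three's cognitive fluidity can be put into a few lines of code:

```python
# A toy caricature (mine, not Mithen's) of the cathedral metaphor:
# specialized "chapels" hold domain knowledge in isolation; a "fluid"
# mind pools them, so cross-domain blends (metaphors) become possible.

social_intelligence = {"friend": "has intentions and moods"}
natural_history     = {"lion": "is a dangerous predator"}
technical           = {"spear": "pierces at a distance"}

def walled_chapel(query, domain):
    """Phase Two: each intelligence answers only from its own domain."""
    return domain.get(query, "no idea")

def fluid_mind(query):
    """Phase Three: walls down -- knowledge flows between domains."""
    pooled = {**social_intelligence, **natural_history, **technical}
    return pooled.get(query, "no idea")

# A walled mind cannot bring social concepts to bear on an animal:
print(walled_chapel("lion", social_intelligence))   # "no idea"

# A fluid mind can blend domains -- the root of cross-domain metaphor,
# e.g. imagining a lion as a person:
print(f"the lion {fluid_mind('friend')}")   # "the lion has intentions and moods"
```

The only point of the sketch is structural: once the "walls" (separate dictionaries) come down, a query can combine knowledge across domains, which is what makes cross-domain metaphor possible at all.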

And modern scientific investigation tends to back these ideas up:

The ability to switch between networks is a vital aspect of creativity. For instance, focusing on a creative puzzle with all of your attention might recruit the skills of the executive attention network. On the other hand, if the creative task involves producing a sonically pleasing guitar solo, focus might be switched from intense concentration to areas more involved in emotional content and auditory processing.

The neuroscience of creativity (Medical News Today)

It is this mixing of intelligences, this cognitive fluidity, that gives rise to language and symbolic thinking. Incremental at first, the blending of these intelligences deepens over time, and this leads to the "Great Leap Forward" seen in the archaeological record:

Of critical importance here is also a marked change in the nature of consciousness. Mithen has argued that reflexive consciousness evolved as a critical feature of social intelligence, as it enabled our ancestors to predict the behavior of other individuals. He then makes the point that there is now reason to expect early humans to have had an awareness about their own knowledge and thought processes concerning the nonsocial world. Via the mechanism of language, however, social intelligence began to be invaded by nonsocial information, and the nonsocial world becomes available for reflexive consciousness to explore…Consciousness then adopted the role of a comprehensive, integrating mechanism for knowledge that had previously been “trapped” in specialized intelligences.

The first step toward cognitive fluidity appears to have been integration between social and natural history intelligence in early modern humans around 100,000 years ago. The final step to full cognitive fluidity, the potential to entertain ideas that bring together elements from normally incongruous domains, occurred at different times in different populations between 60,000 and 30,000 years ago. This involved an integration of technical intelligence, and led to the cultural explosion we are now calling the appearance of the human mind.

…As soon as language started acting as a vehicle for delivering information into the mind, carrying with it snippets of nonsocial information…[it] now switched from a social to a general-purpose function, consciousness from a means to predict other individuals’ behavior to managing a mental database of information relating to all domains of behavior…Mithen’s most interesting point here is that some metaphors and analogies can be developed by drawing on knowledge within a single domain, but the most powerful ones are those that cross domain boundaries. By definition these kinds of metaphors can arise only within a cognitively fluid mind… Alone in the World, pp.197-199

Yes, but were they conscious? There’s the rub. Is artwork proof of reflective self-consciousness? Are burials proof of such? Clearly tool use alone is not, as we’ve seen. And some of the most vibrant artwork has been done by schizophrenics.

Like Mithen, Jaynes also calls attention to the vital role of language and metaphor in cognitive fluidity and reflective self-consciousness. Even ‘the self’ itself is a metaphor!

…The most fascinating property of language is its capacity to make metaphors … metaphor is not a mere extra trick of language…it is the very constitutive ground of language. I am using metaphor here in its most general sense: the use of a term for one thing to describe another because of some kind of similarity between them or between their relations to other things.

There are thus always two terms in a metaphor, the thing to be described, which I shall call the metaphrand, and the thing or relation used to elucidate it, which I shall call the metaphier. A metaphor is always a known metaphier operating on a less known metaphrand.

It is by metaphor that language grows. The common reply to the question "what is it?" is, when the reply is difficult or the experience unique, "well, it is like –." In laboratory studies, both children and adults describing nonsense objects (or metaphrands) to others who cannot see them use extended metaphiers that with repetition become contracted onto labels. This is the major way in which the vocabulary of language is formed. The grand and vigorous function of metaphor is the generation of new language as it is needed, as human culture becomes more and more complex.

It is not always obvious that metaphor has played this all-important function. But this is because the concrete metaphiers become hidden in phonemic change, leaving the words to exist on their own. Even such an unmetaphorical-sounding word as the verb 'to be' was generated from a metaphor. It comes from the Sanskrit bhu, "to grow, or to make grow," while the English forms 'am' and 'is' have evolved from the same root as the Sanskrit asmi, "to breathe."

It is something of a lovely surprise that the irregular conjugation of our most nondescript verb is thus a record of a time when man had no independent word for 'existence' and could only say that something 'grows' or that it 'breathes.' Of course we are not conscious that the concept of being is thus generated from a metaphor about growing and breathing. Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use. pp. 48-51
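Jaynes's two terms can be captured in a tiny data structure; the class and the second example below are my own scaffolding for his definitions, not anything from the book:

```python
# A minimal sketch of Jaynes's terminology as a data structure:
# a metaphor applies a familiar metaphier to a less-known metaphrand.

from dataclasses import dataclass

@dataclass
class Metaphor:
    metaphrand: str   # the thing to be described (less known)
    metaphier: str    # the familiar thing used to describe it
    def __str__(self):
        return f"{self.metaphrand} is like {self.metaphier}"

# Jaynes's etymological example from the passage above, plus one of ours:
being = Metaphor(metaphrand="existence", metaphier="growing (Sanskrit bhu)")
mind  = Metaphor(metaphrand="the mind", metaphier="a cathedral")

print(being)  # "existence is like growing (Sanskrit bhu)"
print(mind)   # "the mind is like a cathedral"
```

The asymmetry is the whole point: the metaphier must already be known for the metaphor to illuminate the metaphrand, which is why, in Jaynes's account, vocabulary always grows from the concrete toward the abstract.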

The ancient Greeks at the time of Homer lacked a word for blue; they referred to the Mediterranean Sea, for example, as "wine-colored" (οἶνοψ). The brilliant hues of the Mediterranean sunrise are famously described as "rosy-fingered" (ῥοδοδάκτυλος), and so forth. Wikipedia even has a list of them. A similar concept in Old Norse is called a kenning (e.g. blood = "battle sweat").

In reading ancient texts, we have one of the rare opportunities to look upon a worldview entirely alien to us. The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine, and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black or simply dark in ancient texts.

Also, things like subjective perspective and experience are lacking. Even body parts are regularly described as having their own minds. And voices are supposedly heard in the external world, command voices telling people what to do, while voices aren’t described as being heard within the head. There is no ancient word or description of a fully internalized sense of self.

It’s hard to know what to make of all this. There are various theories that attempt to explain it. But the main takeaway is that our common sense assumptions are false. There is something more to human nature and human society than we at present experience and understand. As a species, we are barely getting to know ourselves.

Benjamin David Steele (Facebook post)

Note that in our discussion above, even our descriptions of the mind rely upon metaphors (“Swiss army knife,” “cathedral”) and spatialization (“leaping forward”).

Finally, there was a German theorist of religion named Max Müller who saw the origin of what we call 'gods' in the way that humans naturally tend to conceive of things they do not understand metaphorically. His theories have been all but forgotten, but I think they fit nicely with the idea that in order to comprehend certain natural phenomena, ancient peoples resorted to assigning them the category 'god,' even when they knew, for instance, that the sun was not literally the chariot of Apollo, or that lightning bolts were not literally thrown by Zeus. Keep in mind, what we think of when we hear the word 'god' in our rationalist, materialistic, monotheistic-influenced culture is probably so different from what the ancient people using it at the time meant that we moderns cannot even conceive of what they had in mind. Here's E. E. Evans-Pritchard describing Müller's theories:

In [Müller’s] view, as I understand it, men have always had an intuition of the divine, the idea of the infinite–his word for God–deriving from sensory experience…Now, things which are intangible, like the sun and the sky, gave men the idea of the infinite and also furnished the material for deities…Müller did not wish to be understood as suggesting that religion began by men deifying natural objects, but rather that these gave him a feeling of the infinite and also served as symbols for it.

Müller was chiefly interested in the gods of India and of the classical world…His thesis was that the infinite, once the idea had arisen, could only be thought of in terms of metaphor and symbol, which could only be taken from what seemed majestic in the known world, such as the heavenly bodies, or rather their attributes. But these attributes then lost their original metaphorical sense and achieved autonomy by becoming personified as deities in their own right. The nomina became numina.

So religions, of this sort at any rate, might be described as a 'disease of language', a pithy but unfortunate expression which later Müller tried to explain away but never quite lived down. It follows, he held, that the only way we can discover the meaning of the religion of early man is by philological and etymological research, which restores to the names of the gods and the stories told about them their original sense.

Thus, Apollo loved Daphne; Daphne fled before him and was changed into a laurel tree. This legend makes no sense till we know that originally Apollo was a solar deity, and Daphne, the Greek name for the laurel, or rather the bay tree, was the name for the dawn. This tells us the original meaning of the myth: the sun chasing away the dawn.

E.E. Evans-Pritchard – Theories Of Primitive Religion, pp. 21-22

What We Talk About When We Talk About Consciousness

Previously: What If God Was One Of Us?

Last time we discussed the radical idea that "consciousness" arose relatively late in human history, roughly around the time of the Late Bronze Age Collapse in the Mediterranean.

Now, it's important to understand that when Jaynes uses the term "consciousness," he is talking about something very specific. It's not simply being responsive to one's exterior surroundings (sense perception), but being aware of them and filtering them through some kind of "inner life". Jaynes contends that this sort of meta-awareness arrived relatively late in human history, and that we can pinpoint this change in comprehension through a careful reading of ancient literature, especially sacred literature and epic poetry.

Think of it this way: you see an apple; light waves hit your eyes, which send signals to your brain via the optic nerve. You "choose" to reach out and grasp it. A nerve signal goes out from the brain to your arm and hand. The apple is touched. Nerve fibers in the hand send signals back to the brain, describing the temperature, texture, firmness, and so forth. All of these signals are processed in various areas of the brain, which we can see as neurons firing in those areas on an fMRI scan.

Jaynes isn’t talking about any of that stuff. That’s the process of sense perception. He’s talking about something else entirely. As Marcel Kuijsten of the Julian Jaynes society describes:

[2:30-3:57] "In a nutshell, what Jaynes argues is that, as humans evolved language, the brain was using language to convey experience between the two hemispheres, which were operating in, let's say, a less integrated fashion than they are today."

"This idea is a little shocking to people initially, because behavior was then directed by what we today call an auditory hallucination. But there's a lot of evidence that he presents for this. The ancient literature is filled with all of these examples of people's behavior being directed by what they interpreted as the gods, idols that they used to elicit these commands, and just quite a bit of evidence that he gets into explaining all this."

"From that he realized that consciousness was not what people generally assume to be a biologically innate, evolved process, but it was something that was learned, and it was based on language. So after language got to a level of complexity, then we developed this ability to introspect. So he places the date for the development of consciousness much more recently than traditional ideas."

“[10:18] Most of the critiques of the theory are based on misconceptions…[11:04] The most common mistake is that they are criticizing what Jaynes is saying based on their own view of consciousness rather than how Jaynes defines it. And consciousness is defined so differently by so many people that when you go to conferences on consciousness you see all these people giving lectures and they’re all really defining it in very, very different ways.”

Julian Jaynes and the Bicameral Mind Theory (This View of Life Magazine)

Jaynes himself acknowledges the inherent difficulty of using our own conscious mind to come to an intellectual reckoning of, well, itself!

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple is that to say; how difficult to appreciate! It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade a mentality when actually it does not. p. 23

Again, consciousness is not simply the sense perception of the world around you. It's not required to do basic things like eat, sleep or have sex. It's not even necessary for talking. Chimpanzees (and gorillas) have been taught to "talk" using sign language. Unless we attribute reflective self-consciousness to great apes, then clearly language, in terms of expressing simple desires and notions about the world using nouns and verbs, is not, strictly speaking, an act that only conscious beings can perform, at least as Jaynes describes consciousness. All animals communicate in some fashion, whether they are self-conscious or not.

Also, it’s thought that language actually evolved in humans primarily for gossip, and that gossip evolved as a method of social bonding and coalition building, and not, please note, for ruminative thought or reflective self-awareness:

Human language didn’t evolve to name things but for gossip — our equivalent of primates grooming — which developed to maintain the bonds of trust in the ever growing social groups our ancestors formed to protect themselves against predators as they moved ‘out of Africa’ to survive…We continue to gossip today — approximately 65% of modern talking time is taken up by it, irrespective of age, gender or culture. The topics tend to be extreme events (both good and bad) that we struggle to make sense of alone. By engaging our peers we are better able to understand and act in the world around us.

The Problematic Storytelling Ape (Medium)

Nor is consciousness strictly necessary for large-scale social organization to develop. For example, there are many eusocial and prosocial species among earth's various types of animals. Ants, bees, and wasps are among the most successful animal species on the planet, engaging in agriculture, building large nests, raising each other's young, waging organized war, and living in vast "cities." Are the Hymenoptera conscious in the same way humans are? It's highly doubtful. And yet they live in complex societies and many of their behaviors are similar to ours.

“I’ll take the example of the leaf cutter ant,” [economics professor Lisi] Krall explained … “They cut and harvest leaves, and then they feed the leaves to their fungal gardens, and they themselves then feed on the fungal gardens,” she said. The ants “develop into vast, vast colonies that have highly developed, profound divisions of labor.” Sound familiar?…”We engaged a kind of social evolution, that started with agriculture, that put us on a path of expansion and interconnectedness and ultimately, in humans, hierarchy, and all that kind of stuff,” she said.

Humans are more like ants than lone wolves (Treehugger)

Even writing existed for thousands of years as simply a mnemonic device for recording straightforward things like genealogies and inventories—”lists, lists and more lists,” as James C. Scott put it. There’s no indication that writing, strictly speaking, requires self-consciousness.

Agriculture, villages, towns, even cities and empires arose without the benefit of writing. The earliest forms of cuneiform writing consisted of clay tablets recording market transactions and tax records with [no] moral, political or legal lessons for future generations… These were mnemonic devices, no better and no worse than a string tied around the finger or rather more sophisticated sets of knots created by the Incans [sic]. The tablets circulated as bills of exchange, carrying a symbolic value as money rather than a historical value as something-to-be-preserved. Their symbolic function served, the tablets were simply thrown away in the trash. Daniel Lord Smail, On Deep History and the Brain p. 57

Animals have also constructed dwellings like hives, mounds, and nests, and made artwork: “Animal-made works of art have been created by apes, elephants, cetacea, reptiles, and bowerbirds, among other species.” (Wikipedia)

Chimpanzee wins $10,000 prize for abstract painting (The Guardian)

It used to be thought that reflexive self-consciousness was necessary for any sort of complex culture to exist, and that cumulative cultural evolution was something unique to humans. However, in 2014 researchers managed to induce cumulative cultural evolution in baboons. In 2017, it was found that homing pigeons can also gather, pass on and improve knowledge over generations. Then, whales and dolphins (cetaceans) were added to the mix. Then came migrating ungulates (hoofed mammals). Last year, researchers even detected evidence of it among fruit flies!

Primatologists now regularly attribute the differences in chimpanzee behavior among various troops across Africa to "culture" rather than biological instinct. And tool use has been documented in a wide range of animals:

The suggestion that humanity is distinct by virtue of possessing a culture subject to Lamarckian evolution is more problematic than it may appear. The glitch lies in the fact that humans are no longer considered to be the only species to possess culture.

The idea that other animals have culture has been circulating for nearly three decades and has reached a point of media saturation that partially obscures the challenge created by the fact of animal culture. Although early studies focused on the apes and monkeys who make tools and wash sweet potatoes, culture does not end with primates.

Birds’ songs and migration routes are learned and transmitted culturally rather than genetically. Some groups of dolphins manipulate sponges to protect their noses while foraging and teach the practice to their offspring. The crows of New Caledonia clip twigs to create hooked tools that are used to retrieve insects from crevices. As with chimpanzees, the types of tools used by crows vary from one group to the next, suggesting that the very use of tools is transmitted through culture. Daniel Lord Smail: On Deep History and the Brain; p. 87

So, more and more, we are finding that self-reflective consciousness is not strictly necessary for many of the behaviors we used to think were uniquely human. Cumulative cultural evolution was there all along just waiting for us to find it! To a drastically lesser degree than us, of course, but it was there nevertheless. We were just too arrogant and self-absorbed to look properly.

So, then, what exactly do we mean when we talk about consciousness? Unless we consider baboons, chimps, orangutans, dolphins, whales, pigeons, crows, bighorn sheep, ants, termites and fruit flies as all conscious the way we are, we must look elsewhere, or else redefine what it is that we are truly searching for in the first place.

What it does not mean is what's usually called "operant conditioning." All animals are capable of that. Jaynes himself dismisses operant conditioning as an indicator of the type of consciousness that characterizes human beings. After describing standard experiments in which he "taught" everything from plants to microbes to reptiles to complete various tasks, he realized this had nothing whatsoever to do with the type of conscious behavior he was looking for:

It was, I fear, several years before I realized that this assumption makes no sense at all. When we introspect, it is not upon any bundle of learning processes, and particularly not the types of learning denoted by conditioning and T-mazes. Why then did so many worthies in the lists of science equate consciousness and learning? And why had I been so lame of mind as to follow them?…

It is this confusion that lingered unseen behind my first struggles with the problem, as well as the huge emphasis on animal learning in the first half of the twentieth century. But it is now absolutely clear that in evolution the origin of learning and the origin of consciousness are two utterly separate problems…

Is consciousness…this enormous influence of ideas, principles, beliefs over our lives and actions, really derivable from animal behavior? Alone of species, all alone! We try to understand ourselves and the world. We become rebels or patriots or martyrs on the basis of ideas. We build Chartres and computers, write poems and tensor equations, play chess and quartets, sail ships to other planets and listen to other galaxies – what have these to do with rats in mazes or the threat displays of baboons? The continuity hypothesis of Darwin for the evolution of the mind is a very suspicious totem of evolutionary mythology… pp. 7-8

The chasm is awesome. The emotional lives of men and other mammals are indeed marvelously similar, but to focus upon the similarity unduly is to forget that such a chasm exists at all. The intellectual life of man, his culture and history and religion and science, is different from anything else we know of in the universe. That is fact. It is as if all life evolved to a certain point, and then in ourselves turned at a right angle and simply exploded in a different direction. p.9

Jaynes controversially rejects the idea that consciousness is necessarily a part of human thinking and reasoning, as we commonly assume it must be. He cites the work of the Würzburg School of psychology in Germany and their discovery of so-called “imageless thoughts.”

The essential point here is that there are several stages of creative thought: first, a stage of preparation in which the problem is consciously worked over; then a period of incubation without any conscious concentration upon the problem; and then the illumination which is later justified by logic. The parallel between these important and complex problems and the simple problems of judging weights or the circle-triangle series is obvious. The period of preparation is essentially the setting up of a complex situation together with conscious attention to the materials on which the struction is to work. But then the actual process of reasoning, the dark leap into huge discovery, just as in the simple trivial judgment of weights, has no representation in consciousness. Indeed, it is sometimes almost as if the problem has to be forgotten to be solved. p. 44

Jaynes points out that not only is consciousness not necessary for the performance of routine daily tasks, it can actually be counterproductive! Self-conscious reflection puts us in the position of "watching ourselves" from an observer's point of view, and our performance often degrades as a result. That is, we involve our "ego self" in what we happen to be doing at the moment. You can see this all the time with athletes: once they start consciously trying to win, they trip up and stop winning. The best sporting actions are performed with a certain lack of self-reflection (dare we say, a lack of conscious introspection), leading to a sense of spontaneity. We might almost call it a trance, as in the Taoist tale of the dexterous butcher. There is a word for this "non-conscious" state in Chinese philosophy: wu-wei, or non-action. Ted Slingerland, an expert in ancient Chinese philosophy, has written a whole book about this concept called Trying Not to Try.

It’s clearly a different sort of consciousness that Jaynes is after. It is something uniquely human, but we don’t seem to be able to find it anywhere we look, except as a difference of degree over various other animals. Even art, culture, building, reasoning and communication are not immune!

Nor does the “self” or “consciousness” have any sort of fixed anatomical location, inside your noggin or anywhere else for that matter, as we seem to assume. Many ancient peoples located their conscious selves in the heart, not in the head. The ancient Greeks did so, seeing the brain as merely a cooling organ for blood, like a car radiator. Out-of-body experiences also testify that consciousness can locate itself anywhere, not even within the physical body itself!

Where does consciousness take place? Everyone, or almost everyone, immediately replies, in my head. This is because when we introspect, we seem to look inward on an inner space somewhere behind our eyes. But what on earth do we mean by ‘look’? We even close our eyes sometimes to introspect even more clearly. Upon what? Its spatial character seems unquestionable…

We not only locate this space of consciousness inside our own heads. We also assume it is there in others’. In talking with a friend, maintaining periodic eye-to-eye contact (that remnant of our primate past where eye-to-eye contact was concerned in establishing tribal hierarchies), we are always assuming a space between our companion’s eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.

And this is the very heartbeat of the matter, for we all know perfectly well that there is no such space in anyone’s head at all! There is nothing inside my head or yours except physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant. pp. 44-45

Let us not make a mistake. When I am conscious, I am always and definitely using certain parts of my brain inside my head. But so am I when riding a bicycle, and the bicycle riding does not go on inside my head. The cases are different of course, since bicycle riding has a definite geographical location, while consciousness does not. In reality, consciousness has no location whatever except as we imagine it has. p. 46

In the end Jaynes concludes with regard to consciousness:

We have been brought to the conclusion that consciousness is not what we generally think it is. It is not to be confused with reactivity. It is not involved in hosts of perceptual phenomena. It is not involved in the performance of skills and often hinders their execution. It need not be involved in speaking, writing, listening, or reading. It does not copy down experience, as most people think. Consciousness is not at all involved in signal learning, and need not be involved in the learning of skills or solutions, which can go on without any consciousness whatever. It is not necessary for making judgements or in simple thinking. It is not the seat of reason, and indeed some of the most difficult instances of creative reasoning go on without any attending consciousness. And it has no location except an imaginary one! The immediate question therefore is, does consciousness exist at all? pp. 46-47

Jaynes concludes that it does, but to understand what he means, we have to start thinking about it in a totally different way. And for that reason, we can’t find it simply by studying physical processes in the brain. We need to engage in a bit of existentialist philosophy:

The trick to understanding his model is first understanding what he means by “consciousness”. I don’t think he means what most of us mean when we talk about, say, the “hard problem” of consciousness. In modern considerations of consciousness, I think we largely refer to subjective experience – the “what it is like” to be aware of the world. Jaynes, however, dismisses this as mere sensory perception. He is more interested in what it is to have an internal “mindspace”, an “analog I” that experiences the world. Jaynes argues for the emergence of this sense of self and an inner mindspace from language. He sees the infinite capacity for metaphor inherent in human language as a means by which we can build similarly infinite concepts and ideas about our relationship with the external world.

That is, when we introspect upon our experience as selves in the world, we construct an inner self, an “I” that exists within our mind’s eye which is what it is that has these experiences, these relationships. This inner self is an analog for what our senses perceive and how we react and is what gives us a sense of the first person in how we view the world. I guess Jaynes is thinking here of some kind of conscious interiority, a feeling of being “in here” rather than “out there” (or perhaps nowhere at all).

Jaynes observes (as have many others) that this kind of awareness rests upon language. Human language has two distinctive features – the capacities for metaphorical representation and infinite recursion. With these basic tools, human beings can build infinitely complex models of self and experience. We can also use language to communicate – share – these models. In fact, over time it is this sharing that helps to construct commonly held models of experience that shape the course of cultural progress.

Julian Jaynes and the Analog “I” (Science Philosophy Chat Forums)

The key to this is how the brain uses language to construct the self:

It is through language that we construct models of the self and through translation of our intuitions into words and ideas that we learn the limits of this language and the limits of our own particular perspective.

Through language we learn to differentiate between ourselves and others from a young age even if consciousness is not a concept that we ever learn explicitly or ever truly “know” our self.

It is in natural language — the spoken word, novels, poetry, vague metaphorical speech, descriptions of made-up things like love and self and consciousness — that we have our greatest tool to share our subjective experiences. A powerful tool to build a common roadmap to create better selves.

The self may be a fiction but in that case it is all the more vital that we embrace fiction, and by extension natural language, to communicate with each other at an ever deeper level.

The Problematic Storytelling Ape (Medium)

Thus, language is crucial in constructing the “self”, i.e., the concept of the individual “I” that we normally carry around all day inside our heads—the homunculus with no material existence that we nevertheless feel is “in there” somewhere. But—it’s important to note—the mere presence of language and writing by itself does not necessarily indicate that such introspective thinking exists. Rather, the self—the analog I—is a “concept” that utilizes our innate capacity for language, but is independent of it:

The analogue-I and analogue-me refer to mental self-relevant images that take a first-person vs. third-person perspective, respectively. Mental self-analogues are essential for goal setting, planning, and rehearsal of behavioral strategies, but they often fuel emotional and interpersonal problems when people react to their analogue selves as if they were real.

The Analogue-I and the Analogue-Me: The Avatars of the Self (Self and Identity)

Behavioral scientists have studied how this self interacts with the world, and have found that there is not one, unitary “self” consistent over time, but multiple selves! These selves are often present at the same time, although separated in space. This mindblowing idea alone should cause us to reject the idea that the self is just a biological process inside our heads rather than a mental construct. In a recent study on willpower, the authors propose a conflict between multiple overlapping selves: “Simply put, you in the present is different than you in the future.” (Treehugger)

The second class of models posits multiple coexisting selves. This view holds that decision makers behave as if they were a composite of competing selves with different valuation systems and different priorities.

One “self” craves instant gratification (e.g., “I want to eat a cheeseburger! Yum!”), whereas another “self” is focused on maximizing long-term outcomes (e.g., “I want to eat a salad and be healthy!”). Self-control conflicts are the consequence of a present-oriented valuation system disagreeing with a future-oriented valuation system

…Evidence for multiple system models comes from functional MRI (fMRI) studies showing that self-controlled choices were associated with lateral prefrontal areas of the brain, whereas more impulsive choices were associated with the ventral striatum and ventromedial prefrontal cortex.

Beyond Willpower: Strategies for Reducing Failures of Self-Control (Sage Journals)

Given all this, Jaynes finally lists what he believes are the core characteristics of the kind of introspective human consciousness he’s talking about:

1. Spatialization – We tend to describe reality in terms of spatial visualization. “If I ask you to think of the last hundred years, you may have a tendency to excerpt the matter in such a way that the succession of years is spread out, probably from left to right. But of course there is no left or right in time. There is only before and after, and these do not have any spatial properties whatever – except by analog. You cannot, absolutely cannot think of time except by spatializing it. Consciousness is always a spatialization in which the diachronic is turned into the synchronic, in which what has happened in time is excerpted and seen in side-by-sideness.” p. 60

2. Excerption (screening, or filtering) – Our perception of our reality is necessarily limited. “In consciousness, we are never ‘seeing’ anything in its entirety…we excerpt from the collection of possible attentions to a thing which comprises our knowledge of it. And this is all that is possible to do since consciousness is a metaphor of our actual behavior.”

3. The Analog ‘I’ – “…the metaphor we have of ourselves which can move about vicarially in our imagination doing things we are not actually doing…In the example of…spatialization, it was not your physical behavioral self that was trying to ‘see’ where my theory ‘fits’ into the array of alternative theories. It was your analog ‘I’.” pp. 62-63

4. The Metaphor ‘Me’ – “We can both look out from within the imagined self at the imagined vistas, or we can step back a bit and see ourselves perhaps kneeling down for a drink of water at a particular brook.”

5. Narratization – We construct narratives to understand the world: “In our consciousness we are always seeing our vicarial selves as the main figures in the stories of our lives. In the above illustration, the narratization is obvious, namely, walking along a wooded path. But it is not so obvious that we are constantly doing this whenever we are being conscious, and this I call narratization.”

6. Conciliation – We comprehend new things by fitting them within established patterns. “…a slightly ambiguous perceived object is made to conform to some previously learned schema, an automatic process sometimes called assimilation. We assimilate a new stimulus into our conception, or schema about it, even though it is slightly different…assimilation consciousized is conciliation. In conciliation we are making excerpts or narratizations compatible with each other, just as in external perception the new stimulus and the internal conception are made to agree…”

To this I would also add that the human mind seems to have an inherent instinct for meaning or purpose, and that it tends to be quite good at self-deception. We’ll also later explore the human mind’s capacity for recursion.

To get some clues about how this all developed, we’ll next take a look at some theories of how the modern human brain evolved from earlier hominins.

BONUS: Robert Sapolsky: Are Humans Just Another Primate?

What If God Was One of Us?

Did ancient peoples have a fundamentally different consciousness than modern people?

Horus is my co-pilot

It’s a question I think deserves serious attention. Of course, this leads to a discussion of what the heck “consciousness” even means—does it mean self-awareness, or self-conscious introspection, or our perception of consensus reality? What constitutes “reality?” Are dreams and hallucinations “real,” for instance? And what does “self-awareness” really mean, anyway? Solipsism? Or something else?

Even in a secular age, consciousness retains a mystical sheen. It is alternatively described as the last frontier of science, and as a kind of immaterial magic beyond science’s reckoning. David Chalmers, one of the world’s most respected philosophers on the subject, once told me that consciousness could be a fundamental feature of the universe, like space-time or energy. He said it might be tied to the diaphanous, indeterminate workings of the quantum world, or something nonphysical.

These metaphysical accounts are in play because scientists have yet to furnish a satisfactory explanation of consciousness. We know the body’s sensory systems beam information about the external world into our brain, where it’s processed, sequentially, by increasingly sophisticated neural layers. But we don’t know how those signals are integrated into a smooth, continuous world picture, a flow of moments experienced by a roving locus of attention—a “witness,” as Hindu philosophers call it.

Do Animals Have Feelings? (The Atlantic)

As one commenter on Reddit points out about the Atlantic article above:

“Consciousness” is an archaic sort of catch-all phrase without much empirical definition and usefulness. Sort of like how physicists used to use “ether” to describe things. Of course we’ve upgraded our concepts (and respective language) for a more enriched understanding, not needing the idea of “ether” anymore.

As the Atlantic article referenced above describes, “If one of the wasp’s aquatic ancestors experienced Earth’s first embryonic consciousness, it would have been nothing like our own consciousness.” But the question we’re pondering today is whether even our own remote ancestors had a consciousness very different than our own.

To deal with this question, let’s take a look at the 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, by psychologist Julian Jaynes.

The idea is that in these ancient Mediterranean civilizations, the typical human had one or more ‘gods’ — spirits, agents, separate intelligences — living alongside the conventional ‘self’ in the brain. In other words, the dominant pattern was to maintain two separate, verbally-intelligent control centers in the same brain — one for the ‘gods’ and one for the ‘humans’/’mortals’/’selves’.

Jaynes refers to this arrangement as bicameral, which means two-chambered. That’s because he postulates that the gods and conventional selves were headquartered in the two chambers of the brain — the right and left hemispheres (respectively). I think this is plausible enough, but Jaynes admits that it’s speculative, and it’s not strictly necessary for the rest of his theory. What matters is only that the human brain is (empirically!) capable of something like this arrangement.

In other words, the gods took on some of the functions we think of as the “will” or volition. (But not the conscience; that would only later become a function of a very different kind of god.) Here’s Jaynes:

“The gods were in no sense ‘figments of the imagination’ of anyone. They were man’s volition. They occupied his nervous system… and from stores of admonitory and preceptive experience, transmuted this experience into articulated speech which then ‘told’ the man what to do.”

Think of it this way. Today we have a lot of mental phenomena we can’t really account for, like “intuitions” or “gut feelings.” … Now imagine that “bad feeling” in the form of a voice telling you, “Be careful! Don’t agree to anything!”

Mr. Jaynes’ Wild Ride (Melting Asphalt)

Of the theory, Ran Prieur says, “I’m sure that ancient people had different consciousness than modern people, but Jaynes thought it was *really* different: that they were basically all schizophrenic, hearing voices and seeing visions, which they interpreted as gods.” That is, “Julian Jaynes believed that ancient people experienced their gods as auditory hallucinations.”

The experience of multiple personalities or hearing disembodied voices is extremely common even today, and not only in people suffering from acute schizophrenia:

As much as 10% of the population hear voices at some point in their lives, much higher than the clinical incidence of schizophrenia (1%)…And around 65% of children say they have had ‘imaginary friends’ or toys that play a sort of guardian-angel role in their lives.

Jaynes thought children evolve from bicameral to conscious, much as Piaget thought young children are by nature animist (i.e., they attribute consciousness to things, and may attribute special consciousness to favourite toy-companions)…

Gods, Voice Hearing and the Bicameral Mind (Philosophy for Life)

This Aeon article is a fascinating overview of how psychologists have tried to explain how our “inner voice” integrates our personality over the course of our development. It describes the research of Charles Fernyhough, a leading researcher of inner speech and auditory hallucination at Durham University in the United Kingdom:

‘It’s possible to inner “hear” your own voice rather than speak your own voice,’ … Here, people listen to their own voice in their heads, perceiving the same sonic characteristics as expanded speech, but without the agency. Such experiences have been recalled by participants as their voice ‘just happening’, as ‘coming out of its own accord’, as ‘taking place’ rather than ‘being uttered’.

Some people passively experience inner speech in voices not their own – essentially as auditory hallucinations that they cannot control. Founding member of the Beach Boys Brian Wilson described the experience to Larry King in an interview on CNN in 2004: ‘I’m going to kill you. I’m going to hurt you’, an inner voice had continually repeated to him since his initial experiences with LSD in the 1960s. The value of understanding such hallucinations is self-evident: they are a hallmark of schizophrenia, a condition that affects almost 24 million people worldwide.

Of great fascination, Fernyhough has concluded that a small but significant part of the general population also experience auditory hallucinations – a phenomenon the researchers call ‘voice hearing’ to distinguish it from schizophrenia. Such voices have been reported by noted individuals throughout history, says Fernyhough. The Greek philosopher Socrates described what he called a ‘daemonic sign’, an inner voice warning him that he was about to make a mistake. Joan of Arc described hearing divine voices since childhood – the same ones that influenced her motivation to help in the siege of Orleans. The 15th-century mystic and autobiographer Margery Kempe wrote about inner conversations with God. Sigmund Freud was not immune: ‘During the days when I was living alone in a foreign city … I quite often heard my name suddenly called by an unmistakable and beloved voice.’

All this leads to another, confounding question: are verbal thoughts reaching awareness just the tip of a mental iceberg, offering only a glimpse of the unconscious mind?

The inner voice (Aeon)

Or are verbal thoughts themselves consciousness?

It’s not as crazy as it sounds at first blush. In fact, we commonly experience all sorts of “altered” mental states throughout our lives—hypnotic trances, hallucinations and visions, flow (a.k.a. “being in the zone”), fever delirium, getting stoned or drunk, orgasm, dizziness, out-of-body experiences, and most obviously, dreams and nightmares. Then of course, there are our moods (anger, excitement), and feelings (ennui, jealousy).

Here are a few examples to get started: tunnel vision, runner’s high, ‘flow’, déjà-vu, daydreaming, and orgasm. Then there are spiritual or religious experiences, which are characterized by a suppressed ego and a heightened sense of unity…Then there are the states attending to physical illness — stupor, delirium, lightheadedness, or (in extreme cases) out-of-body experiences. Moods and emotions also correspond to states of consciousness: sadness, fear, surprise, laughter, joy, lust, anxiety, guilt, anger, shame, pride, boredom, and nostalgia.

Drugs put us into all kinds of interesting states…let’s not forget all the weird things that happen around sleep. Drowsiness, hypnagogia, hypnopompia, the Tetris effect, and of course dreaming itself. Every night we spend an hour or so cavorting around in a rich hallucinated fantasyland — and we think nothing of it. But this should give us pause. A brain that’s capable of dreaming should be capable of almost anything.

And all of this is only the tip of the iceberg — the states that most people have experienced at some point in their lives. In fact the brain is capable of many more and stranger things, especially if we admit into our catalogue all the states attending to brain damage, mental illness, torture, and sleep- or sensory-deprivation. Alien hand syndrome and face-blindness are but two examples.

Accepting Deviant Minds (Melting Asphalt)

The author of the above piece speculates that in our age of constant digital distractions and stimulation, we may one day lose our ability to let our minds wander—that is, to daydream (interestingly, the first sign of self-consciousness in the androids of Westworld is the “Reverie,” a synonym for daydreaming). If something like that were to happen, future humans would have a hard time understanding what the heck daydreaming once was, even though it’s well attested in literature. People who engaged in such behaviors in the future would be considered “deviant” or “mentally ill” and in need of treatment. Descriptions of this behavior in the past would be considered some sort of archaic collective psychosis, if not downright fantastical.

It’s not hard to imagine a world — 500 years from now, say — in which adults have lost the ability to daydream. Children, even infants, will grow up immersed in computer-mediated reality and be bombarded every waking moment with ‘optimal’ stimulation. In such a saturated world, a normal human brain may well become incapable of “day-dreaming” — of pulling up anchor from reality and drifting off into aimless daytime fantasies.

I’m not putting this forward as an actual prediction of what’s likely to happen, but merely as a hypothetical “what-if” scenario.

So what would this future society think of the few remaining people who are prone to “day-dreams”? Theirs will be the brains that, by definition, don’t respond in the normal way to environmental conditioning. It will be easy and tempting, then, to classify such people as mentally ill — to diagnose them with Aimless Imagination Disorder, perhaps. And surely there will be drugs to help keep them attending to reality, i.e., to what’s happening on their screens.

Accepting Deviant Minds (Melting Asphalt)

We would treat these daydream believers in much the same way as we treat the people who “still” hear voices in their heads today. For example:

In the 1980s, a Dutch psychiatrist called Marius Romme was treating a 30-year-old voice-hearer called Patsy Hague. She was on tranquilizers, which failed to stop the voices and made it difficult for her to think. She became suicidal.

Then Romme happened to lend her a copy of Jaynes’ book. It made her think perhaps she was not ill so much as ‘living in the wrong century’, and also gave her confidence that her voices were ‘real’, or as real as the invisible God that Romme and others believed in. Hague told Romme: ‘You believe in a God we never see or hear, so why shouldn’t you believe in the voices I really do hear?’ Why not listen to what the voices had to say, rather than dismissing them as meaningless pathological symptoms?

Romme set up a meeting between Hague and other voice-hearers, who enthusiastically swapped stories and shared their sense of helplessness, vulnerability and alienation from their society. A sort of peer-led support network emerged, and has continued to blossom since then…

Gods, voice hearing and the bicameral mind (Philosophy for Life)

So who is to say what is ultimately “real” and “not real” when it comes to mental states? Our “sense of self” is just as imaginary a construct as all those ghosts and demons and other assorted imaginary friends, as this writer points out:

The brain…is capable of some pretty weird stuff. It’s not just a blank slate holding symbolic impressions of what’s happening out in the world…

I’ve spent a lot of effort…preparing us not to reject the idea of hallucinated gods out of hand. But now I ask that you keep just one thing in mind as you continue to read about Jaynes — namely, this objective fact about our species:

The human brain is capable of hallucinating voices.

Yes, hallucinated voices are weird — but they really happen. And sometimes we can even be quite cavalier about them. Every night, for example, we spend an hour or so immersed in a rich hallucinated fantasyland — only to dismiss it, when we wake up, as “just a dream.”
Wait a minute. “Just” a dream? If a dream wasn’t perfectly normal, it would be the weirdest thing that ever happened to you.

When we accuse a hallucinated voice, or the spirit that takes over during a possession, of being unreal, on what do we base the accusation? Both voices and spirits are, as we’ve seen, neurologically real — they correspond to a real pattern of neurons capable of exhibiting real intelligence. Both can be treated as agents, i.e., the kind of thing toward which it’s productive to take the intentional stance.

If anything, our objection lies in the fact that voices and spirits don’t have any reality in the world outside our minds. But there’s something else that has all these properties: the self. I, ego, myself, my conscious will. A neurologically real agent with no physical reality outside of the mind.

Hallucinated Gods (Melting Asphalt)

In fact, some people even go so far as to actively cultivate their inner voice. This is a part of both Eastern esoteric traditions (e.g. Tibetan Buddhism) and Western ones (e.g. Magick). Many otherwise “sane” people with addictions often describe their addiction as a separate consciousness from their own which “makes” them drink, or do drugs, or whatever, with their “real” selves going along for the ride. These are “drives,” or “sub-personal agents,” which our minds possess. Even today we refer to our personal “demons”—a telling expression, I think. Sometimes people even give their addictions or inner voices a name. Maybe in the past, they called the voices things like “Utu” or “Osiris” or “Aten” or “Apollo”:

…there’s no objective sense in which one of your voices could be the “same” as one of my voices. The process of naming/identifying one’s voices is strictly a symbolic, interpretive act — and as such it would have been fraught with social and political implications. There were personal gods, household gods, state and local gods, each a meaningful token of a different kind of loyalty.

No doubt identification was influenced by all sorts of factors in the child’s life: his parents, priests, and peer group; norms about whether it’s OK to ‘invent’ new gods; where he spent his time; where he heard his voices. If a child hallucinated one of his voices with particular strength at the temple of Osiris, while bathing in the imagery, mythology, and personality of Osiris — well, it only makes sense for that voice to ‘be’ Osiris.

Hallucinated Gods (Melting Asphalt)

Nor is this just ancient history. Yesterday I was reading an article in The Guardian about a British lady named Amanda Feilding who is leading a one-woman crusade to legalize psychedelic drugs around the world for use in the treatment of serious mental disorders. Of her childhood, there’s this fascinating tidbit:

Before the light outside goes, Feilding insists that we have a wander around the grounds, where the seeds of her curiosity were sown. Out among the ancient hedges and ponds she points out the mound and tree stump that she believed housed a private god figure; her game, aged five or six, was to find ways to make that god laugh, “that kind of orgasm experience that I think a lot of young children have and then forget”.

Feilding did not forget. She wanted afterwards, she says, to recreate that childlike intensity of experience…As Feilding explains this former life, in digressive fits and starts, fretting a little that she is saying too much, she leads me through the twilit garden, over well-trodden stepping stones, pointing out a pond she dug “based on sacred geometries”, with a half-submerged colonnade as if from a forgotten civilisation…

Amanda Feilding: ‘LSD can get deep down and reset the brain – like shaking up a snow globe’ (The Guardian)

Incidentally, a spirit inhabiting a particular inanimate object or place is called a tutelary deity in theology, and such deities are quite common across cultures. The notion appears to be an outgrowth of animism:

A tutelary (also tutelar) is a deity or spirit who is a guardian, patron, or protector of a particular place, geographic feature, person, lineage, nation, culture, or occupation. The etymology of “tutelary” expresses the concept of safety, and thus of guardianship. (Wikipedia)

It’s interesting to contemplate the fact that in ancient literature–religious or not–humans are almost always depicted as communicating directly with the deities! For example, in every ancient legal code I’m aware of, the laws were received directly from the gods by the lawgiver, as if dictated to a stenographer. Moses is one case, but hardly the only one. What if this was more than just colorful metaphor?


Aeon has a fascinating piece up on the origins of monotheism, which seems to have arisen more-or-less simultaneously in both Egyptian and Hebrew culture. While many (such as Freud) have speculated that one must have influenced the other, there is no record of any direct contact. The change in religion happened rapidly, over just a few decades, rather than by gradual evolution, the author contends. What’s especially interesting is the author’s speculation about how a direct communication with the deity brought about the monotheistic revolution:

My theory is that Akhenaten himself very early in his reign (or even just before) experienced a theophany – a dream or some sort of divine manifestation – in which he believed that Aten spoke to him. This encounter launched his movement which took seven to nine years to fully crystallise as exclusive monotheism.

Great idea, but based on what evidence? Mention has already been made of the two major Aten Temples called Gemet Pa-Aten constructed at Karnak and Akhet-Aten. A third temple by the same name was built in Nubia. Three temples with the same name is unprecedented, and suggests that its meaning, ‘The Aten is Found’, was vitally important to the young king’s religious programme. Could the name of the three sanctuaries memorialise the dramatic theophany that set off the revolution?

Akhenaten also uses the same language of discovery to explain how he found the land where he would establish the new city, Akhet-Aten. The aforementioned boundary inscription records Akhenaten’s words when travelling through the area that would become his new capital:

“Look, Aten! The Aten wishes to have [something] made for him as a monument … (namely) Akhet-Aten … It is Aten, my father, [who advised me] concerning it so it could be made for him as Akhet-Aten.”

Later in the same inscription, the king again repeats the line: ‘It is my father Aten who advised me concerning it.’ These texts point to an initial phenomenological event in which the king discovered the new form of the sun-god and then, through a later revelation, Aten disclosed where his Holy See should be built.

The first God (Aeon)

Interestingly, Islamic monotheism began in a similar fashion when the Arab merchant and trader Muhammad heard a voice commanding him to “Recite!” That voice was later attributed to the archangel Gabriel, depicted in Islam as the messenger of God (Allah).

This is naught but a revelation revealed,
taught him by one mighty in power,
very strong; he stood poised
being on the higher horizon,
then drew near and suspended hung,
two bows’-length away, or nearer,
then revealed to His servant that he revealed.

What struck me in the passage above is how it does seem as though Akhenaten is being compelled to do various things by some sort of commanding entity, just as Jaynes hypothesized. Akhenaten even implies that the god Aten is his “father” (monotheism is suffused with patriarchal ideas). Of course, Moses is also depicted as speaking with God directly in the Scriptures. Again, we “moderns” interpret this stuff as simply poetic license. But if Jaynes’ suppositions are to be taken seriously, it could have been much more than that!

Hammurabi receiving the laws from the sun-god Shamash

Put another way, the “self” may not be something intrinsic to the brain’s function, but something that is wired up in the environment (or not), depending on the circumstances. That is, it’s environmentally constructed. After all, the human brain is uniquely plastic, and, unlike most animals, does much of its “hardwiring” in the first twenty or so years of life outside the womb:

If we accept that the brain is teeming with agency, and thus uniquely hospitable to it, then we can model the self as something that emerges naturally in the course of the brain’s interactions with the world.

In other words, the self may be less of a feature of our brains (planned or designed by our genes), and more of a growth. Every normal human brain placed in the right environment — with sufficient autonomy and potential for social interaction — will grow a self-agent. But if the brain or environment is abnormal or wrong (somehow) or simply different, the self may not turn out as expected.

Imagine a girl raised from infancy in the complete absence of socializing/civilizing contact with other people. The resulting adult will almost certainly have a self concept, e.g., will be able to recognize herself in the mirror. But without language, norms, shame, and social punishment, the agent(s) at the top of her brain hierarchy will certainly not serve a social/PR role. She’ll have no ‘face’, no persona. She’ll be an intelligent creature, yes, but not a person.

Neurons Gone Wild (Melting Asphalt)

A real-world example is that of Helen Keller:

Another way to think of this is to imagine what would be in our heads without language. What would be left of you, had you no language with which to express your experience to yourself? I suggest no “you” at all, beyond the immediacy of existence. In this respect, it is instructive to recall Helen Keller’s words in her essay Before the Soul Dawn:

“Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness.”

“I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.”

And her awakening upon beginning to know language, when she first appreciated the relationship between a finger-movement against her palm and the idea of ‘water’:

“That word startled my soul, and it awoke, full of the spirit of the morning, full of joyous, exultant song. Until that day my mind had been like a darkened chamber, waiting for words to enter and light the lamp, which is thought.”

(As an aside, notice here the striking contrast between the non-world of conscious unconsciousness first described and the bounding, fulsome world of metaphor that springs forth in that final paragraph).

Julian Jaynes and the Analog “I” (Science Philosophy Chat Forums)

In this way, the “self” takes on a structure that depends on (and reflects) the environment it was raised in. Perhaps auditory hallucinations and split personalities are something like vestigial behaviors such as goosebumps, or the palmar grasp reflex, that were part of our brain’s deep evolution. Their manifestation (or lack thereof) depends on the particular environment, genetics, and certain complex personality dispositions.

This presents tantalizing connections with the work of a long-forgotten Soviet psychologist named Lev Vygotsky. His work was suppressed and remained forgotten in the West until the 1980s, according to the Aeon article above, so Jaynes may never have heard of it. But one wonders whether he could have incorporated Vygotsky’s ideas on the inner voice being a product of the environment into his research:

Lev Vygotsky…said the human mind was shaped by social activity and culture, beginning in childhood. The self, he hypothesised, was forged in what he called the ‘zone of proximal development’, the cognitive territory just beyond reach and impossible to tackle without some help. Children build learning partnerships with adults to master a skill in the zone, said Vygotsky, then go off on their own, speaking aloud to replace the voice of the adult, now gone from the scene. As mastery increases, this ‘self-talk’ becomes internalised and then increasingly muted until it is mostly silent – still part of the ongoing dialogue with oneself, but more intimate and no longer pronounced to the world. This voice – at first uttered aloud but finally only internal – was, from Vygotsky’s perspective, the engine of development and consciousness itself.

The Inner Voice (Aeon)

So is the “integrated self,” with its inner voice, simply a bunch of neurons firing in the brain, or is it a product of particular environmental circumstances? And did it emerge as the dominant mental paradigm fairly recently in recorded history, perhaps as recently as the Bronze Age? And, prior to that, was our inner voice considered to be a numinous experience by ancient peoples, one that they related to in the only way they could (because of theory of mind)–as another sort of living being (daemons, manes, spirits, angels, jinn, elves, and so forth)?

We’ll be considering that next time. But, before that, we need to consider what we talk about when we talk about consciousness.

Snow Day

I came across an older episode of Tangentially Speaking on my computer, so I’ll lazily recycle Chris Ryan’s words:

[8:16] “Thank you to all of you who wrote to me expressing your opinions and your encouragement and your hesitations and everything else after the intro to the last episode where I talked about the conundrum and the conflicts that I’m feeling about this project. It is so cool, really. Really, it’s so cool for people to be writing to me from, you know, dropping out of the sky and expressing your support and your concern. I feel like I’ve got so many friends that I haven’t met, and that’s a wonderful, beautiful feeling to have. So I really–thank you for that.”

And I echo his sentiments. Thank you for all your letters of support and encouragement.

Right now, we’ve just been hit with another snowstorm; this one had already dumped well over a foot by the time I walked out my door. I’m at Hi-Fi Cafe right now, but I have a long afternoon of shoveling when I get home, and somehow figuring out how to shovel out my mom’s house as well. On a lighter note, I’m apparently officially their most dedicated customer:

At the moment I’m just enjoying the time off and not having to commute across town in blizzard conditions. I’ve really got to get out of here. Some of you who wrote to me are Milwaukeeans/ex-Milwaukeeans/Wisconsinites etc. Wherever you are, take care. Honestly, I don’t know how you (we) do it anymore.

My internet is now restored, so I’ll hopefully have time to respond to all of you over the rest of the week. Before I do anything, I’ve got to close out my mother’s estate, which is still a ton of work, even over a year later. It also means selling the house. I have no idea how to do that, but I want to do it as economically as possible since I’ll probably have to live on the proceeds. Right now I’m looking at For Sale by Owner. But with the epic weather we’ve been experiencing (which seems to happen every year now), it will most likely be Spring before I can realistically even think of hitting the road to anywhere (When I say ‘Spring’ I mean on the calendar—Wisconsin has no true spring. It goes from cloudy and 30-40 degrees to 70’s and sunny sometime around late May/early June. Those of you who live here will know what I’m talking about).

I’ve got some thoughts loosely based on a post I wrote on Ran Prieur’s subreddit, which I see he has addressed on his site. I should get that up soon. Incidentally, would anybody be interested in an r/hipcriminals subreddit for discussion of posts?

I’ve also seen a couple of articles on the 200th birthday of John Ruskin, whose thoughts, as I learn more about them, echo my own in many ways. So here’s a quote that’s apt for clearing out an estate. Turns out, getting rid of everything you own is surprisingly *hard*:

“Every increased possession loads us with new weariness.”

Happy 200th birthday, John Ruskin (Lloyd Alter, Treehugger)

Was Ruskin the most important man of the last 200 years? (BBC)

Apparently, the new Marie Kondo television program has got people saying goodbye to their possessions and giving them away to thrift stores en masse. This is leading to a massive glut, which I’ve definitely noticed myself. Consignment stores are throwing in the towel left and right, often going out of business entirely. Antique stores are flooded and have signs posted outside stating that they won’t buy anything from anyone, ever. Even thrift stores are becoming reluctant to take more stuff. The antiques my mom collected are now practically worthless and tough to get rid of. Not great for me right now, but great for changing social mores away from overconsumption.

Why are antiques now so cheap? (Marginal Revolution)

Related, this comments thread on Naked Capitalism:

This article reminded me of another trend but that is more long term. We had to move my mother out of her unit not that long ago as she was too old to stay there and had already broken her hip and had to wait until somebody checked on her to find her.
We had to get rid of most of her stuff as she could not take it to the nursing home she was moving to. A lot of the smaller goods and trinkets we took to charity shops and I saw how the shelves were almost overflowing with such good quality things. And I mean good quality stuff.
It then occurred to me that nearly all her generation was either passing away or downgrading or moving to a retirement home to live. As these people had to downsize in any case, or had family having to get rid of their things, a lot of this stuff was going to such charity shops which explained possibly why there was so much stuff there. As the baby-boomers age even more, I would expect the tempo to increase here.
[…]
The very same mechanism is driving the collectibles market in a downwards spiral of too much inventory coming on the market as baby boomers and their parents downsize, while millennials could care less about the debris field they leave behind, they aren’t into it.
[…]
More and more of that stuff will end up in landfills since fewer and fewer in the generations after the Baby Boomers have the 3-level homes to house all that shit. 2 bedroom apartments if they’re lucky, or 1 small room in a motel. Or homeless. As George Carlin said, “Houses are containers for holding crap,” and millennials won’t be buying those 3-level, crap-holders.
[…]
One of my siblings is a major hoarder (real problem), who also sells antiques and used items in a number of different stores in their area (western PA). The three stores where they work are LOADED to the gills, and I regularly hear tales of lots of “looking” but not much buying.
Indeed, it’s true that as the older generation – now mainly the Korean War gang – moves into retirement/nursing homes, they are purging their stuff, and the boomers aren’t far behind.
[…]
What is this ongoing trend going to do to Target and other retailers of cheap Chinese crap as well as clothing sales? It’s got to hurt them. Not only are younger people buying less, but now they have all this almost free higher quality stuff available.

Links 2/7/19 (Naked Capitalism)

The things you find in the time capsule:

Yep, once upon a time women were terrified of being too skinny and looking for ways to help them put on weight. Then we discovered putting sugar in everything. Now our magic beans do other things.

Postscript: a lot of you who wrote to me are dealing with various health conditions. I was in a very serious relationship with a woman who had severe Fibromyalgia—to the point where she was basically disabled. So I have some idea what it’s like. I’m glad I could provide you with some distraction and food for thought. Take care.

P.P.S.: A few of you mentioned Morris Berman’s writings. Funny enough, not only have I followed Morris Berman’s blog and read a number of his books, he was actually one of the first commenters on this very blog! All the way back in 2011, shortly after I first started working on the hipCrime Vocab, I wrote a 9-part series called Is Japan the Future?. I posted a link to it on http://morrisberman.blogspot.com/. Berman was actually working on a book about Japan at the time (which has since been published), and wrote to me encouraging me to publish what I wrote in book form. It was very kind of him to do so, especially given that I’m essentially nobody. I know he’s moved to Mexico and seems to like it there.

Also, some people suggested teaching English abroad. I’ve actually contemplated that idea before. A few years ago I visited WESLI in Madison. Anyone have any experience/thoughts on their program? Thanks.