Fun Facts

It’s time for another edition of Fun Facts!

One in 20 patients remains aware but paralysed during major medical procedures – though the vast majority will not remember it afterwards.

We still don’t know exactly why anaesthetic agents dim our consciousness.

http://www.bbc.com/future/story/20190313-what-happens-when-anaesthesia-fails

In 24 States, 50% or more of babies are born on Medicaid; New Mexico leads the nation with 72%.
https://www.cnsnews.com/news/article/terence-p-jeffrey/24-states-50-babies-born-medicaid

White and middle-aged Americans are the demographic groups most at risk for suicide. Between 1999 and 2017, U.S. suicide rates increased by 45 percent for men ages 45 to 64 and by 62 percent for women in that age group.

In 2014, the most recent year such breakdowns of data are available, men with only a high school diploma were twice as likely to die by suicide as men with a college degree. And although middle-aged men of all educational groups experienced rising suicide rates during the Great Recession, 2007 to 2010, since then rates for college-educated men have slightly declined while those for men with only a high school diploma have continued to rise.
https://www.washingtonpost.com/outlook/the-dangerous-shifting-cultural-narratives-around-suicide/2019/03/21/7277946e-4bf5-11e9-93d0-64dbcf38ba41_story.html?noredirect=on&utm_term=.ce661ebcfe7e

Suicidal behavior has nearly doubled among children aged 5 to 18, with suicidal thoughts and attempts leading to more than 1.1 million ER visits in 2015 — up from about 580,000 in 2007, according to an analysis of U.S. data.
https://www.reddit.com/r/science/comments/bb0gr6/suicidal_behavior_has_nearly_doubled_among/

Graduate students experience depression and anxiety at six times the rate of the general population.
https://marginalrevolution.com/marginalrevolution/2019/04/sentences-to-ponder-109.html

Suicide attempts by self-poisoning have more than doubled in teens and young adults in the last decade in the U.S., and more than tripled for girls and young women.
https://www.sciencedaily.com/releases/2019/05/190502075817.htm

Seventy-one percent of Americans age 17 to 24 are ineligible to join the military, primarily because they are too overweight or too poorly educated, or they have a record of serious crime or drug abuse.
https://www.reddit.com/r/todayilearned/comments/bb9c03/til_that_71_percent_of_americans_age_17_to_24_are/

“This analysis of a large, nationwide sample demonstrated that Emergency Department visits for SA/SI (suicide attempt/suicide ideation) doubled among youth between 2007 and 2015.”
https://jamanetwork.com/journals/jamapediatrics/article-abstract/2730063
Nothing to see here. Everything’s fine and getting better and better under Neolobsterism, er, Neoliberalism!

Forty-four percent of the decline in male workforce participation from 2001 to 2015 is due to opioids.
https://www.economy.com/dismal/analysis/datapoints/350176/Opioids-and-the-US-Labor-Market/

The 2008 financial crash directly led to 6566 suicides, causing more deaths than the September 11 attacks.
https://www.reddit.com/r/todayilearned/comments/bfacoa/til_the_2008_financial_crash_directly_led_to_6566/


Lawns, by acreage, are the United States’ largest irrigated crop, surpassing corn.
https://www.curbed.com/2019/3/13/18262285/mcmansion-hell-kate-wagner-lawn-care-mowing

Between 1980 and 2000, about 100 million hectares of tropical forests — roughly the area of France and Germany combined — were converted for grazing, monoculture plantations like palm oil, or short-term subsistence farming.
https://thetyee.ca/Opinion/2019/05/09/Traditional-Economics-Biodiversity-Natural-Accounting/

70% of all birds currently alive are chickens and other poultry, enough to create their own geological strata.
http://www.bbc.com/future/story/20190502-why-the-post-natural-age-could-be-strange-and-beautiful

Globally, one in five deaths are associated with poor diet.
https://www.sciencedaily.com/releases/2019/04/190403193702.htm

The share of 25-34 year-olds who are college graduates was 11% in 1960, 27.5% in 2000, and 35.6% in 2017.
https://conversableeconomist.blogspot.com/2019/03/some-us-social-indicators-since-1960.html

Millennial households had an average net worth of about $92,000 in 2016, nearly 40% less than Gen X households in 2001, adjusted for inflation, and about 20% less than baby boomer households in 1989.
https://marginalrevolution.com/marginalrevolution/2019/05/the-great-reset-applied-to-millennials.html

Prisoners employed in manufacturing constituted 4.2% of total U.S. manufacturing employment in 2005.
https://www.ineteconomics.org/research/research-papers/economic-consequences-of-the-u-s-convict-labor-system

Total national health expenditures were 5.0% of GDP in 1960, and 17.9% of GDP in 2017.
https://conversableeconomist.blogspot.com/2019/03/some-us-social-indicators-since-1960.html

Health care CEO pay topped $1 billion in 2018.
https://www.axios.com/health-care-ceo-salaries-2018-3aff66cd-8723-4ec8-abe8-dd19edd24390.html


About half the population [of the United States] in 1915 lived in rural areas, meaning areas with fewer than 2,500 residents. In 2010, by contrast, only 1 in 5 people lived in a rural area…In 1915, about 78 percent of U.S.-born individuals were living in the state in which they had been born, compared with 59 percent in 2010.
http://conversableeconomist.blogspot.com/2016/02/the-life-of-us-workers-100-years-ago.html

One-sixth of the U.S. population lives in this megalopolis (the Acela Corridor).
http://tywkiwdbi.blogspot.com/2019/04/one-sixth-of-us-population-lives-in.html

The number of people worldwide who drink water that is contaminated with feces is 1.8 billion. (But they have cell phones!!!!)
https://www.treehugger.com/environmental-policy/world-water-disaster-numbers.html

Americans are routinely exposed to dangerous chemicals that have long been banned in countries such as the UK, Germany and France. Of the 40,000 chemicals used in consumer products in the US, according to the EPA, only one percent have been tested for human safety.
https://www.theguardian.com/us-news/2019/may/22/is-modern-life-poisoning-me-i-took-the-tests-to-find-out

More than 90 percent of Americans have pesticides or their byproducts in their bodies.
https://www.thenation.com/article/pesticides-farmworkers-agriculture/

Chicago police say that as of 1 March 2019, they had seized more than 1,600 illegal guns this year. The figures equate to one illegal gun taken off the street every 53 minutes.
https://www.bbc.com/news/world-us-canada-47856992

Somebody once estimated that if bees got paid minimum wage, a jar of honey would cost <$1,000
Unsourced – but sounds plausible to me.

Half of England is owned by less than 1% of the population.
https://www.theguardian.com/money/2019/apr/17/who-owns-england-thousand-secret-landowners-author

Ninety-five percent of the trading volume in Bitcoin is fake, ginned up through techniques like “wash trading” where a person buys and sells an asset at the same time.
https://boingboing.net/2019/03/28/grifter-galt-gulch.html

“Can you give me a hand moving these?”

People are pooping more than ever on the streets of San Francisco.
https://www.sfgate.com/technology/businessinsider/article/People-are-pooping-more-than-ever-on-the-streets-13778680.php

The average rent in NYC went from 15% of average income in 1950 to 65% today.
https://slatestarcodex.com/2019/04/24/links-4-19/

The Ancient Egyptians had a pregnancy test that actually worked.
https://history.nih.gov/exhibits/thinblueline/timeline.html

In 1996, a federal welfare reform prohibited convicted drug felons from ever obtaining food stamps. The ban increased recidivism among drug felons. The increase is driven by financially motivated crimes, suggesting that ex-convicts returned to crime to make up for the lost transfer income.
https://www.reddit.com/r/science/comments/bk6voe/in_1996_a_federal_welfare_reform_prohibited/

Abortion laws in Saudi Arabia are more forgiving than in Alabama.
https://www.commondreams.org/news/2019/05/22/abortion-laws-saudi-arabia-more-forgiving-alabama-report

Apple purchases a new company every two to three weeks on average, and has bought between 20 and 25 companies in the last six months alone.
https://boingboing.net/2019/05/06/robert-bork-payout.html

In 2017, the world subsidized fossil fuels by $5.2 trillion, equal to roughly 6.5 percent of global GDP. That’s up half a trillion dollars from 2015, when global subsidies stood at $4.7 trillion, according to the IMF. If governments had only accounted for these subsidies and priced fossil fuels at their “fully efficient levels” in 2015, then worldwide carbon emissions would have been 28 percent lower, and deaths due to toxic air pollution 46 percent lower.
https://www.theatlantic.com/science/archive/2019/05/how-much-does-world-subsidize-oil-coal-and-gas/589000/

Government action to slash pollution before the Beijing Olympics in 2008 led to a rise in birth weights in the city.
https://www.theguardian.com/environment/ng-interactive/2019/may/17/air-pollution-may-be-damaging-every-organ-and-cell-in-the-body-finds-global-review

The United States has spent more subsidizing fossil fuels in recent years than it has on defense spending.
https://www.rollingstone.com/politics/politics-news/fossil-fuel-subsidies-pentagon-spending-imf-report-833035/

Although James Watt developed the coal-powered steam engine in 1776, coal supplied less than 5 percent of the planet’s energy until 1840, and it didn’t reach 50 percent until 1900.
https://www.nybooks.com/articles/2019/04/04/future-without-fossil-fuels/

When Ashton Kutcher’s film “Dude, Where’s My Car?” was released, a movie reviewer for USA Today wrote, “Any civilization that can produce a movie this stupid deserves to be hit by famine and pestilence.”
https://www.reddit.com/r/todayilearned/comments/beohel/til_when_ashton_kutchers_film_dude_wheres_my_car/

Random GOT Thoughts


I’m no Game of Thrones expert or superfan, but I thought I’d share a few thoughts considering that the show is ending this weekend.

Now, I should note that I haven’t seen any episodes this season. I’m not technically inclined enough, nor motivated enough, to do what everyone else apparently does and just steal it off the streaming torrents on the Interwebs.

My television access consists solely of a flat screen TV hooked up to a DVD player in my basement, where I watch DVDs that are exclusively rented from the public library, so that I pay zero $$ for my entertainment. But that means I’m usually a bit behind on current teevee—I just watched the last season of Poldark that aired on Masterpiece last year, for example. But the huge 2-year gap between seasons meant that I was all caught up on the latest developments on GOT going into this final season. Thus, I was eager to see how they would tie up all those various dangling plot threads and resolve the multiplicity of character and story arcs.

Given all the hullabaloo surrounding the ending of the show, I figured there was no way I was going to isolate myself from the plot developments for the year or so that it will take for it to come out on DVD and wind up on my local library’s DVD reservations list. And so I went the opposite route: I decided to actively read as much as I could about it. Because, let’s face it, I want to know how the story ends just as much as everyone else out there!

My experience of the show’s final season, then, has been entirely second-hand (articles, reviews, and YouTube videos), so I will necessarily be hampered by that. But, having said that, I’m amazed at how negative the coverage has been in the main. Fans, it seems, are quite sore and very disappointed. Unreasonably so, I think (although maybe I’ll change my mind once I actually get to see it).

(And it goes without saying that if you ARE one of the people who IS trying to actively avoid any spoilers about what happens this season—and actually think you can accomplish this—then stop reading right now, and do not read any further!!!)

If you’re still here, here are a few random thoughts I had about the penultimate episode, with the caveats above.

1. In my opinion, the destruction of King’s Landing by fire seems like the most logical thing to happen, and really is a masterstroke for many reasons. After all, the whole series of books was entitled “A Song of Ice and Fire” by its author. So, we had the “ice” aspect of the series resolve itself in episode three with the defeat of the Night King and his hordes of ice-zombies, and now it is time for the “fire” to play its integral role in the plot with the destruction of King’s Landing by dragonfire. Song of Ice and Fire, get it?

2. But why was the destruction of King’s Landing so meaningful? Why were there atrocities committed? And why was all of that necessary given the logic of the books?

Well, in the medieval-fantasy genre, war and warfare have traditionally been portrayed as “noble” and “heroic”—as the climax of a conflict between “pure” good and “absolute” evil. Look at the final battle in The Lord of the Rings, for example. The standard trope is: the “rightful ruler” takes their (usually his) place on the throne; is just and benevolent; all conflict ceases; and they all live happily ever after, et cetera, et cetera.

George R.R. Martin’s books, by contrast, have always been about bringing a sense of realism to the genre and subverting the usual fantasy tropes. And how could there be a better subversion than this? After all, this is what happens in actual war. It’s a murderous, bloody, and brutal affair. And it was just as much so during the ancient and medieval periods as it is today. Isn’t it about time that the fantasy genre grew up and acknowledged this gruesome reality?

George R.R. Martin’s novels have been steeped in history from the very beginning. I believe that a knowledge of history is not just useful but essential to understanding his writing. With that in mind, there were two major historical events running through my mind as I read the accounts of episode five online. The first was the famous Siege of Jerusalem by the Crusaders in 1099. Wikipedia summarizes:

Atrocities committed against the inhabitants of cities taken by storm after a siege were normal in ancient and medieval warfare. The Crusaders had already done so at Antioch, and Fatimids had done so themselves at Taormina, at Rometta, and at Tyre. However, the massacre of the inhabitants of Jerusalem may have exceeded even these standards. Historian Michael Hull has suggested this was a matter of deliberate policy rather than simple bloodlust, to remove the “contamination of pagan superstition” (quoting Fulcher of Chartres) and to reform Jerusalem as a strictly Christian city…

According to Raymond of Aguilers, also writing solely of the Temple Mount area, “in the Temple and porch of Solomon men rode in blood up to their knees and bridle reins.” Writing about the Temple Mount area alone, Fulcher of Chartres, who was not an eyewitness to the Jerusalem siege because he had stayed with Baldwin in Edessa at the time, says: “In this temple 10,000 were killed. Indeed, if you had been there you would have seen our feet coloured to our ankles with the blood of the slain. But what more shall I relate? None of them were left alive; neither women nor children were spared.”

Siege of Jerusalem (1099) (Wikipedia)

The second historical event I thought of is far more recent—the infamous “Rape of Nanjing” that took place during the Second World War:

Following the capture of Nanjing, a massacre, which was perpetrated by the Imperial Japanese Army, led to the deaths of up to 60,000 residents in the city, a figure difficult to precisely calculate due to the many bodies deliberately burnt, buried in mass graves, or deposited in the Yangtze River…B. Campbell, in an article published in the journal Sociological Theory, has described the Nanjing Massacre as a genocide, given the fact that residents were still slaughtered en masse during the aftermath, despite the successful and certain outcome in battle.

The International Military Tribunal for the Far East estimated that 20,000 women, including some children and the elderly, were raped during the occupation. A large number of rapes were done systematically by the Japanese soldiers as they went from door to door, searching for girls, with many women being captured and gang raped. The women were often killed immediately after being raped, often through explicit mutilation or by penetrating vaginas with bayonets, long sticks of bamboo, or other objects. Young children were not exempt from these atrocities and were cut open to allow Japanese soldiers to rape them…

Nanjing Massacre (Wikipedia)

That is what real warfare looks like. This is what happens. This. This is where the untrammeled pursuit of power by flawed human beings inevitably leads. Always. Is it any wonder that this was the core message that George R. R. Martin (and the showrunners) wished to convey here at the end?

3. But the most important historical event alluded to, in my opinion, must be the firebombing of Dresden (albeit by planes instead of a dragon). I haven’t seen this mentioned anywhere else, but I’m sure someone else must have picked up on it.

Why this event in particular? Well, one reason is the fire aspect, obviously. But it was also one of the most destructive military events of the entire Second World War, by some accounts surpassing even the nuclear weapons dropped on Hiroshima and Nagasaki (only the aerial bombings of Hamburg and Tokyo unleashed more destruction).

Consider this statement by one of the survivors of the bombing:

To my left I suddenly see a woman. I can see her to this day and shall never forget it. She carries a bundle in her arms. It is a baby. She runs, she falls, and the child flies in an arc into the fire.

Suddenly, I saw people again, right in front of me. They scream and gesticulate with their hands, and then—to my utter horror and amazement—I see how one after the other they simply seem to let themselves drop to the ground. (Today I know that these unfortunate people were the victims of lack of oxygen). They fainted and then burnt to cinders.

Insane fear grips me and from then on I repeat one simple sentence to myself continuously: “I don’t want to burn to death”. I do not know how many people I fell over. I know only one thing: that I must not burn.

— Margaret Freyer, survivor.

My guess is that this is the exact feeling intended to be conveyed by the collective writers of GOT in the penultimate episode.

Although I haven’t seen the entire episode, I have seen some of the imagery and clips online. In terms of visuals, I was reminded not only of the iconic photos of the destruction of Hiroshima, but also of the visuals from the German film Downfall (Der Untergang) showing the unfathomable devastation of Berlin when the war had finally concluded.

From the teaser trailer released online for season 8, episode six, it looks like much of this same visual imagery will be used by the show’s artistic team in the ultimate episode as well. We’ll see. Again, the message is clear: This is what war is really like, and often the people most devastated by the power game aren’t the ones who are playing it.

The second reason is the moral ambiguity of the attack. While it’s true that Germany had not unconditionally surrendered (unlike King’s Landing), a few historians have argued, controversially, that the bombing of the city was tantamount to a war crime.

Several factors have made the bombing a unique point of contention and debate. First among these are the Nazi government’s exaggerated claims immediately afterwards, which drew upon the beauty of the city, its importance as a cultural icon; the deliberate creation of a firestorm; the number of victims; the extent to which it was a necessary military target; and the fact that it was attacked toward the end of the war, raising the question of whether the bombing was needed to hasten the end…Several researchers claim not all of the communications infrastructure, such as the bridges, were targeted, nor were the extensive industrial areas outside the city center. Critics of the bombing have claimed that Dresden was a cultural landmark of little or no strategic significance, and that the attacks were indiscriminate area bombing and not proportionate to the military gains.

Bombing of Dresden in World War II (Wikipedia)

Yes, war is often morally ambiguous, another message I’m sure Martin et al. were eager to convey here in the final season. I wouldn’t be surprised if the final episode (episode 6) featured attempts by certain actors to “whitewash history” and claim that the destruction of King’s Landing was “necessary” and “inevitable” in the aftermath. History is written by the victors, after all. We’ll see.

And the third reason I think the firebombing of Dresden is the template for the conclusion of the show (and the books) is a literary reference to Kurt Vonnegut’s masterpiece Slaughterhouse-Five.

Now, George R.R. Martin has been a writer his whole life, and he is a keen student of the science fiction/fantasy genre in all of its manifestations. He is clearly a smart guy who knows his history and his literature. There’s no way he’s not intimately familiar with Slaughterhouse-Five, and he would surely want to honor the late Kurt Vonnegut’s masterful meditation on the senseless horrors of war. The centerpiece of Vonnegut’s book is, of course, the firebombing of Dresden, which the young Vonnegut experienced firsthand as an American prisoner of war held in the city.

There may be a few other references too. I wonder if Bran’s mental time traveling is somehow analogous to Billy Pilgrim’s becoming “unstuck in time”. That’s just speculation, of course. But Slaughterhouse-Five is one of the few books that can arguably be called “science fiction” to have transcended the genre—to have become something more. It rises to become a work of great literature, a meditation on the fundamental human experience. And I’m pretty sure that this is what Martin is aspiring to as well. So it’s no stretch to imagine that he would want both to appropriate—and simultaneously pay homage to—Vonnegut’s masterwork in concluding his own epic fantasy series.

So, in my opinion, that’s why things unfolded the way that they did. The final episode will probably make this intent clearer (or not; we’ll see).

Now, as for the rather abrupt and jarring transition of Daenerys Targaryen’s character: well, I agree with those who see it as an unfortunate contrivance, given that the writers were forced by circumstances to wrap up the series in a very limited amount of time. If you’re a literary author, you can spend hundreds of pages and ten years of writing to bring this about in a logically consistent manner. If you’re writing a TV series on a very tight schedule, by contrast, and millions of dollars are at stake, you have to bang out a conclusion whether it is ideal or not. That’s just the reality.

Clearly this outcome had been hinted at all along during the series, albeit subtly and ambiguously. And several events featured prominently this season were clearly intended by the writers to undermine Daenerys’ mental state and set up her character’s troubling final turn.

But it fits in well with Martin’s sensibilities throughout the entire series–that anyone who fashions themselves a “savior” turns out, in the end, to be a monster. Recall Nietzsche’s famous dictum: “Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you.” I’m sure Martin is familiar with Nietzsche as well as Vonnegut. To date, the only consistently “noble” character has been Jon Snow, who has repeatedly insisted that he is not interested in wielding power, and just wants to live a relatively “normal” life beyond the Wall once the war is over. No doubt there’s an intentional statement there.

And that has been a central message of the show from the very beginning. It’s what I believe makes it a truly great work of art (along with the brilliant characters and complex world-building). I feel like many of the fans got so caught up in the messy details that they forgot about the big picture—what I would argue is the central “message” of Martin’s entire Song of Ice and Fire series of novels.

Which is this: There is no “nobility” in the naked pursuit of power. Once you seek to acquire power over others, no matter how noble your intentions may be at the outset, you will inevitably be forced to do things that are immoral. That’s the nature of the game.

And in these dark times, that’s an important message to convey.

Here are a couple of articles I enjoyed about the show’s final season:

Stop the nitpicking! This season of Game of Thrones is miraculous (The Guardian)

The Real Reason Fans Hate the Last Season of Game of Thrones (Scientific American)

The Anthropology of Game of Thrones (YouTube)

Standing on Zanzibar

Given the title of this blog, I couldn’t not post this article from the BBC:

We look to fiction for eternal truths about our world and timeless insights into the human condition – either that or giddy escapism. But sometimes, in striving to achieve any or all of the above, a novelist will use the future as their backdrop; and just occasionally, they’ll predict what’s to come with uncanny accuracy. They can sit down at their desk and correctly envisage, for instance, how generations to come will be travelling, relaxing, communicating. And in the case of John Brunner, a sci-fi author who grew up in an era when the word ‘wireless’ still meant radio – the specificity of his imaginings retains its power to startle.

In his 1968 novel Stand on Zanzibar, for instance, he peers ahead to imagine life in 2010, correctly forecasting wearable technology, Viagra, video calls, same-sex marriage, the legalisation of cannabis, and the proliferation of mass shootings. Equally compelling, however – and even more instructive – is the process by which Brunner constructed this society of his future and our present…

Ultimately, it is Brunner’s process that makes Zanzibar’s crystal-ball-gazing predictions so enduringly fascinating: he arrived at them via a combination of careful observation, listening and reading – that and a zany imagination. He was looking to the future, but it was only by being fully immersed in the present that he was able to see it with such unnerving clarity, effectively turning his typewriter into a time machine…

The 1968 sci-fi that spookily predicted today (BBC). In a weird time-travel-paradox sort of way, maybe he even predicted this blog (which actually debuted in 2011).

(NEGRO. Member of a subgroup of the human race who hails, or whose ancestors hailed, from a chunk of land nicknamed—not by its residents—Africa. Superior to the Caucasian in that Negroes did not invent nuclear weapons, the automobile, Christianity, nerve gas, the concentration camp, military, or the megalopolis.

The Hipcrime Vocab by Chad C. Mulligan) [SOZ; pp. 85-86]

HUMAN BEING. You’re one. At least, if you aren’t, you know you’re a Martian or a trained dolphin or Shalmaneser.
(If you want me to tell you more than that, you’re out of luck. There’s nothing more anybody can tell you.

The Hipcrime Vocab by Chad C. Mulligan) [SOZ; p. 41]

Anthropology/Archaeology Roundup

I have a backlog of anthropology/ancient history news articles, so that means it’s finally time to clear out the links:

Human ancestors were ‘grounded,’ new analysis shows (Science Daily). See my previous article. It’s long been assumed that we descended from tree-dwelling apes, and that ground-based locomotion evolved much later (possibly independently) in ancestral humans and apes. But this new study indicates that the common ancestor of chimps, bonobos and humans had already evolved for mobility on the ground:

In his research, Prang ascertained the relative length proportions of multiple bones in the primate foot skeleton to evaluate the relationship between species’ movement (locomotion) and their skeletal characteristics (morphology). In addition, drawing upon the Ardi fossils, he used statistical methods to reconstruct or estimate what the common ancestor of humans and chimpanzees might have looked like.

Here, he found that the African apes show a clear signal of being adapted to ground-living. The results also reveal that the Ardi foot and the estimated morphology of the human-chimpanzee last common ancestor is most similar to these African ape species.

“Therefore, humans evolved from an ancestor that had adaptations to living on the ground, perhaps not unlike those found in African apes,” Prang concludes. “These findings suggest that human bipedalism was derived from a form of locomotion similar to that of living African apes, which contrasts with the original interpretation of these fossils.”

The original interpretation of the Ardi foot fossils, published in 2009, suggested that its foot was more monkey-like than chimpanzee- or gorilla-like. The implication of this interpretation is that many of the features shared by living great apes (chimpanzees, bonobos, gorillas, and orangutans) in their foot and elsewhere must have evolved independently in each lineage — in a different time and place.

And, related: Humanity’s Early Ancestors Were Upright Walking Apes (Discover Magazine)

…bipedalism, or two-legged locomotion, was the first major evolutionary change in human ancestors, which is evident from bones. Other distinguishing features, like big brains, small molars and handcrafted stone tools, came millions of years later. Therefore, to find early members of our lineage, anthropologists look for ancient apes with skeletal traits indicative of habitual bipedalism — they regularly walked upright. So who were the first bipedal apes and would you recognize them as relatives?…

History of the Horse: Ancient DNA Reveals Lost Lineages (Discover Magazine)

For the last 5,000 or so years, the horse has done more than any other animal to affect the course of human history (sorry, dogs and cats…it’s not even close).

Horses have hauled us and our stuff (including our languages, cultures and diseases) all over the world. They’ve charged into battle, plowed fields and crisscrossed continents delivering news. And, after death, they’ve been broken down into a variety of useful products, from hides to food.

But the new research found that some of the traits we associate most closely with horses have only recently evolved. For example, the genetic variations associated with locomotive speed appear to be the product of selective breeding only in the last 1,000 or so years.

The European horse breeds nearly went extinct, and genetic diversity in horses is declining in general:

The genome-wide analysis also found that established populations of European horses were nearly wiped out in the 7th to 9th centuries thanks to the arrival and spread of horses with a Persian pedigree…

Beginning about 2,000 years ago, the diversity of the Y chromosome in domestic horses began to decline, likely because breeders were increasingly choosing specific stallions as studs. But the researchers also found that horses’ overall genetic diversity has fallen by about 16 percent just in the last 200 years, probably because of increased emphasis on the “purity” of a line.

The domestication of horses remains something of a mystery, but I find this author’s speculation a likely possibility:

Archaeological evidence of horse domestication points to the Botai culture of Central Asia at least 5,500 years ago, but those horses are genetically related to the wild Przewalski’s horse, not domestic horses. Various studies have suggested different areas of central and southwestern Eurasia as the homeland of the domestic horse, but the matter remains unresolved.

Personally, I’m betting that the earliest history of the horse took a course not unlike that of the dog: A sweeping 2016 paleogenetic study showed that dogs were domesticated more than once, at about the same time but in different locations, though one lineage eventually dominated.

BONUS: Here’s Brian Fagan on horse domestication from The Intimate Bond: How Animals Shaped Human History:

Capturing or controlling such fast-moving, potentially ferocious animals as tarpans would never have been easy, especially on the open steppe, where close stalking is difficult at best for someone on foot armed with only a bow and arrow or a spear. So the hunters often turned to carefully orchestrated ambushes and cooperative drives. Such hunts required dealing with horses at close quarters. Such circumstances must have been commonplace enough, so much so that hunters may have gotten into the habit of corralling some of the trapped mares alive or even hobbling them, allowing them to feed in captivity until it was time to kill them. They may have focused on slower-moving pregnant mares, which would then give birth in captivity. Their foals would have been more amenable to control if brought up in captivity from the beginning. This may have been how domestication took hold, through loose management of growing herds of mares, who still bred with wild stallions.

This was not, of course, the first time that people had wrestled with the problem of domesticating large, often frisky animals. The first groups to domesticate horses were accustomed to cattle management. Like cattle, horses travel in bands. As with cattle, too, there’s a lead female, who decides the route for the day. The others follow. Cattle and sheepherders had known for centuries that to control the leader was to control the herd, whether a flock of sheep or a small group of cattle. p. 138

No one knows precisely where horses were first domesticated, but if genetics is any guide, they were tamed in many locations between eastern Europe and the Caucasus. We will never find a genetically ancestral mare, the “Eve,” as it were, of Equus caballus, for crossbreeding with wild stallions was commonplace. With genetics inconclusive, we have to fall back on archaeological clues. These are contradictory at best. As is the case with cattle, it’s a question of interpreting slaughter curves compiled from jaws and teeth. They can tell us the ages of slaughtered beasts, but not necessarily what the patterns mean. Unfortunately, too, there was so much size variation in wild horse populations that diminishing size is an unreliable criterion. p. 139

Quite when people first rode horses is the subject of unending academic debate, largely because it’s virtually impossible to tell from archaeological finds. At first, people rode their beasts with some form of noseband of leather, rope, or sinew, which rarely survive in archaeological sites. Bits, bridles, and other equipment came into use centuries later than animal domestication. (The earliest bits date to about 3000 BCE, made of rope, bone, horn, or hardwood. Metal bits appear between 1300 and 1200 BCE, originally made of bronze and later of iron.) But just how big a step was this? Perhaps the transition from herding to riding was much less than we think, accustomed as we are to bucking broncos and rodeos, also to terrified pedigree animals whose every instinct is to flee, flail out savagely, or bite. We shouldn’t forget that the first people to ride horses had almost certainly sat on the backs of oxen, which already plowed fields and served as occasional pack animals. Also the first horses to be ridden on the steppe were much smaller than some later breeds. Even more important, those who domesticated them were intimately familiar with the behavior of agitated horses confronted with the unfamiliar. p. 141

We are learning more about the domestication of sheep and goats:

At the ancient settlement of Aşıklı Höyük in central Turkey, archaeological evidence suggests that humans began domesticating sheep and goats around 8450 BC. These practices evolved over the next 1,000 years, until the society became heavily dependent on the beasts for food and other materials.

The team used the urine salts [left behind by humans and animals] to calculate the density of humans and animals at the site over time, estimating that around 10,000 years ago, the density of people and animals occupying the settlement jumped from near zero to approximately one person or animal for every 10 square meters. The results suggest that domestication may have been more rapid than previously expected. They also support the idea that the Neolithic Revolution didn’t have just one birthplace in the Fertile Crescent of the Mideast, but rather occurred across several locations simultaneously.

Switch from hunting to herding recorded in ancient urine (Science Daily)

Horses were just the beginning, however: Humans Domesticated Dogs And Cows. We May Have Also Domesticated Ourselves (Discover Magazine):

According to proponents [of the so-called self-domestication hypothesis, floated by Charles Darwin and formulated by 21st century scholars], as human societies grew in size and complexity, more cooperative, less combative individuals fared better. These behavioral traits are heritable to some extent and also linked with physical traits, such as stress hormone levels, testosterone during development and skull robustness. Tamer individuals more successfully passed on their genes, and so these traits prevailed in the human lineage. Over time, our species became domesticated.

So it’s thought that humans self-domesticated because aggressive individuals were gradually eliminated from society. A happy tale of “survival of the friendliest.”…

This isn’t the first time I’ve written about this, but this article provides a really good overview:

Researchers now know that breeding animals solely for tameness ultimately leads to full domestication. That’s thanks to an ongoing experiment in fox domestication, started in 1959 Soviet Russia…domesticates’ tameness results from smaller adrenal glands, which release less stress hormones. This physiology allows the creatures to stay cool in situations where wild animals would enter a “fight-or-flight” state.

Compared to their wild forbears, domesticated species are less aggressive and fearful towards humans. They often have floppy ears, curly tails, white spots on their heads, and smaller skulls, snouts and teeth. As adults, they look and act more like juveniles of the wild ancestors, and the males appear less masculine…affected features are influenced by or derive from neural crest cells, a specific class of stem cells. In developing vertebrate embryos, these cells form along the back edge of the neural tube (precursor to the brain and spine). They eventually migrate throughout the body, ultimately becoming many types of tissues, including parts of the skull, adrenal glands, teeth and the pigment cells affecting fur.

In domesticates, these tissues seem underdeveloped or smaller than their wild counterparts. A deficit in neural crest cells could explain this difference, i.e. the domestication syndrome.

In Soviet Russia, animal domesticates you, LoL!

In natural settings and experiments, people are far more prosocial. Chimps are reluctant to cooperate, quick to lose their tempers and prone to aggressive outbursts. Humans, in contrast, routinely communicate and cooperate with strangers. Even infants will use gestures to help others solve a task, such as finding a hidden object.

Scientists have also found evidence for self-domestication in human skeletal remains. Based on what’s happened to animal domesticates, it’s predicted that skulls should have become smaller and more feminine looking (in both sexes) with reduced brow ridges. Indeed, that’s what a 2014 Current Anthropology paper found, which measured Homo sapiens skulls from the Stone Age to recent times, about 200,000 years of human evolution. These results agree with previous studies reporting that average skull — and by proxy brain — volume in Homo sapiens has decreased by roughly 10 percent in the past 40,000 years.

It wasn’t all fun and games, however:

Anthropologist Richard Wrangham has argued that ancient societies likely used capital punishment to execute individuals who acted as belligerent bullies and violated community norms. Through sanctioned, punitive killings, troublemakers were weeded out of humanity’s gene pool.

And despite our propensity to cooperate, humans are obviously capable of war, murder and other atrocities towards our own kind. In his 2019 book The Goodness Paradox, Wrangham attributes this to two biologically distinct forms of aggression: reactive and proactive. The former comprises impulsive responses to threats, like a bar brawl sparked by escalating insults. The latter is planned violence with a clear goal, such as premeditated murder and war. Research suggests these forms of aggression are controlled by different brain regions, hormone pathways and genes — and therefore could be dialed up or down independently by distinct evolutionary pressures.

The book The 10,000 Year Explosion argued the same case:

Selection for submission to authority sounds unnervingly like domestication. In fact, there are parallels between the process of domestication in animals and the changes that have occurred in humans during the Holocene period. In both humans and domesticated animals, we see a reduction in brain size, broader skulls, changes in hair color or coat color, and smaller teeth. As Dmitri Belyaev’s experiment with foxes shows, some of the changes that are characteristic of domesticated animals may be side effects of selection for tameness.

As for humans, we know of a number of recent changes in genes involving serotonin metabolism in Europeans that may well influence personality, but we don’t know what effect those changes have had—since we don’t yet know whether they increase or decrease serotonin levels. Floppy ears are not seen in any human population (as far as we know), but then, changes in the external ear might interfere with recognition of speech sounds. Since speech is of great importance to fitness in humans, it may be that the negative effects of floppy ears have kept them from arising.

Some of these favored changes could be viewed as examples of neoteny—retention of childlike characteristics. Children routinely submit to their parents—at least in comparison to teenagers—and it’s possible that natural selection modified mechanisms active in children in ways that resulted in tamer human adults, just as the behaviors of adult dogs often seem relatively juvenile in comparison with adult wolf behavior.

If the strong governments made possible by agriculture essentially “tamed” people, one might expect members of groups with shallow or nonexistent agricultural experience to be less submissive, on average, than members of longtime agricultural cultures. One possible indicator of tameness is the ease with which people can be enslaved, and our reading of history suggests that some peoples with little or no evolutionary exposure to agriculture “would not endure the yoke,” as was said of Indians captured by the Puritans in the Pequot War of 1636. In the same vein, the typical Bushman, a classic hunter-gatherer, has been described as “the anarchist of South Africa.” pp. 112-113

It’s even written all over our faces: The history of humanity in your face (Science Daily)

Changes in the human face may not be due only to purely mechanical factors. The human face, after all, plays an important role in social interaction, emotion, and communication. Some of these changes may be driven, in part, by social context. Our ancestors were challenged by the environment and increasingly impacted by culture and social factors. Over time, the ability to form diverse facial expressions likely enhanced nonverbal communication.

Large, protruding brow ridges are typical of some extinct species of our own genus, Homo, like Homo erectus and the Neanderthals. What function did these structures play in adaptive changes in the face? The African great apes also have strong brow ridges, which researchers suggest help to communicate dominance or aggression. It is probably safe to conclude that similar social functions influenced the facial form of our ancestors and extinct relatives. Along with large, sharp canine teeth, large brow ridges were lost along the evolutionary road to our own species, perhaps as we evolved to become less aggressive and more cooperative in social contexts.

Another very exciting and important discovery: a Denisovan jawbone indicates that Denisovans (or a close ancestor) were the first known inhabitants of the high-altitude Tibetan Plateau. There they developed particular genetic adaptations to the altitude, and then much later passed these adaptations on to the ancestors of modern humans living there today.

“Our protein analysis shows that the Xiahe mandible belonged to a hominin population that was closely related to the Denisovans from Denisova Cave,” said co-author Frido Welker, from the University of Copenhagen, Denmark.

The discovery may explain why individuals studied at Denisova Cave had a gene variant known to protect against hypoxia (oxygen deficiency) at high altitudes. This had been a puzzle because the Siberian cave is located just 700m above sea level.

Present-day Sherpas, Tibetans and neighbouring populations have the same gene variant, which was probably acquired when Homo sapiens mixed with the Denisovans thousands of years ago. In fact, the gene variant appears to have undergone positive natural selection (which can result in mutations reaching high frequencies in populations because they confer an advantage).

“We can only speculate that living in this kind of environment, any mutation that was favourable to breathing an atmosphere impoverished in oxygen would be retained by natural selection,” said Prof Hublin. “And it’s a rather likely scenario to explain how this mutation made its way to present-day Tibetans.”

Denisovans: Primitive humans lived at high altitudes (BBC).

First hominins on the Tibetan Plateau were Denisovans (Science Daily)

I wonder: Could the legends of the Yeti be a memory of the ancient Denisovans? Now that’s really speculating!

Details of the history of inner Eurasia revealed (Science Daily). The region spans several distinct ecological zones, and its peoples fall into three major genetic groupings:

This vast area can also be divided into several distinct ecological regions that stretch in largely east-west bands across Inner Eurasia, consisting of the deserts at the southern edge of the region, the steppe in the central part, taiga forests further north, and tundra towards the Arctic region. The subsistence strategies used by indigenous groups in these regions largely correlate with the ecological zones, for example reindeer herding and hunting in the tundra region and nomadic pastoralism on the steppe.

They found three distinct genetic groupings, which geographically are arranged in east-west bands stretching across the region and correlating generally to ecological zones, where populations within each band share a distinct combination of ancestries in varying proportions.

The northernmost grouping, which they term “forest-tundra,” includes Russians, all Uralic language-speakers, which includes Hungarian, Finnish and Estonian, and Yeniseian-language speakers, of which only one remains today and is spoken in central Siberia. The middle grouping, which they term “steppe-forest,” includes Turkic- and Mongolic-speaking populations from the Volga and the region around the Altai and Sayan mountains, near to where Russia, China, Mongolia and Kazakhstan meet. The southernmost grouping, “southern-steppe,” includes the rest of Turkic- and Mongolic-speaking populations living further south, such as Kazakhs, Kyrgyzs and Uzbeks, as well as Indo-European-speaking Tajiks.

Because the study includes data from a broad time period, it is able to show shifts in ancestry in the past that reveal previously unknown interactions. For example, the researchers found that the southern-steppe populations had a larger genetic component from West and South Asia than the other two groupings. This component is also widespread in the ancient populations of the region since the second half of the first millennium BC, but not found in Central Kazakhstan in earlier periods. This hints at a population movement from the southern-steppe region to the steppe-forest region that was previously unknown…

Interestingly, this is also where the horse was first domesticated, although, as we saw above, we don’t know exactly when or where. Anyways, back to the first farmers:

The First Anatolian farmers were local hunter-gatherers that adopted agriculture (Science Daily)

Farming was developed approximately 11,000 years ago in the Fertile Crescent, a region that includes present-day Iraq, Syria, Israel, Lebanon, Egypt and Jordan as well as the fringes of southern Anatolia and western Iran. By about 8,300 BCE it had spread to central Anatolia, in present-day Turkey. These early Anatolian farmers subsequently migrated throughout Europe, bringing this new subsistence strategy and their genes. Today, the single largest component of the ancestry of modern-day Europeans comes from these Anatolian farmers. It has long been debated, however, whether farming was brought to Anatolia similarly by a group of migrating farmers from the Fertile Crescent, or whether the local hunter-gatherers of Anatolia adopted farming practices from their neighbors.

A new study by an international team of scientists led by the Max Planck Institute for the Science of Human History and in collaboration with scientists from the United Kingdom, Turkey and Israel, published in Nature Communications, confirms existing archaeological evidence that shows that Anatolian hunter-gatherers did indeed adopt farming themselves, and the later Anatolian farmers were direct descendants of a gene-pool that remained relatively stable for over 7,000 years.

They also built the world’s earliest temple (that we know of). We also know that they didn’t stay put. Anatolian farmers moved around the Mediterranean and into Iberia (Spain). From there, it appears they migrated northward to the British Isles, where they displaced the original hunter-gatherer populations. It is they who brought the tradition of megalithic stone building (and presumably feasting) to prehistoric Britain.

Researchers compared DNA extracted from Neolithic human remains found across Britain with that of people alive at the same time in Europe. The Neolithic inhabitants were descended from populations originating in Anatolia (modern Turkey) that moved to Iberia before heading north. They reached Britain in about 4,000 BC.

The migration to Britain was just one part of a general, massive expansion of people out of Anatolia in 6,000 BC that introduced farming to Europe. Before that, Europe was populated by small, travelling groups which hunted animals and gathered wild plants and shellfish.
One group of early farmers followed the river Danube up into Central Europe, but another group travelled west across the Mediterranean. DNA reveals that Neolithic Britons were largely descended from groups who took the Mediterranean route, either hugging the coast or hopping from island-to-island on boats. Some British groups had a minor amount of ancestry from groups that followed the Danube route.

When the researchers analysed the DNA of early British farmers, they found they most closely resembled Neolithic people from Iberia (modern Spain and Portugal). These Iberian farmers were descended from people who had journeyed across the Mediterranean. From Iberia, or somewhere close, the Mediterranean farmers travelled north through France. They might have entered Britain from the west, through Wales or south-west England…

In addition to farming, the Neolithic migrants to Britain appear to have introduced the tradition of building monuments using large stones known as megaliths. Stonehenge in Wiltshire was part of this tradition.

Although Britain was inhabited by groups of “western hunter-gatherers” when the farmers arrived in about 4,000 BC, DNA shows that the two groups did not mix very much at all. The British hunter-gatherers were almost completely replaced by the Neolithic farmers, apart from one group in western Scotland, where the Neolithic inhabitants had elevated local ancestry. This could have come down to the farmer groups simply having greater numbers…

Professor Thomas said the Neolithic farmers had probably had to adapt their practices to different climatic conditions as they moved across Europe. But by the time they reached Britain they were already “tooled up” and well-prepared for growing crops in a north-west European climate…

Stonehenge: DNA reveals origin of builders (BBC). Interesting that both Spain and Britain became the main centers of shepherding and wool production during the Middle Ages. Here’s an additional comment from Reddit r/history:

Someone working on aDNA here. The Mediterranean route is closely related to Cardium Pottery cultures. Radiocarbon dates suggest that there was a rapid spread about 5500 BCE – including into Sardinia, the South French and Iberian coasts; which has been interpreted as evidence for seafaring spread of agricultural societies. These early farmers slowly and progressively intermixed with surrounding Hunter Gatherers (which were as different from the Early farmers as present-day Chinese are to Europeans!), until these were completely absorbed. Before agriculture (and broadly the people who brought it) moved on to Britain, there was about a 1000 year break – why is kind of unknown.

From a historical period closer to our own time, the DNA of several Crusaders was examined and found to be fairly diverse. However, it appears that Europeans didn’t have much of a lasting imprint on the local populations in the Levant:

Archaeological evidence suggested that 25 individuals whose remains were found in a burial pit near a Crusader castle near Sidon, Lebanon, were warriors who died in battle in the 1200s. Based on that, Tyler-Smith, Haber, and their colleagues conducted genetic analyses of the remains and were able to sequence the DNA of nine Crusaders, revealing that three were Europeans, four were Near Easterners, and two individuals had mixed genetic ancestry.

Throughout history, other massive human migrations — like the movement of the Mongols through Asia under Genghis Khan and the arrival of colonial Iberians in South America — have fundamentally reshaped the genetic makeup of those regions. But the authors theorize that the Crusaders’ influence was likely shorter-lived because the Crusaders’ genetic traces are insignificant in people living in Lebanon today. “They made big efforts to expel them, and succeeded after a couple of hundred years,” says Tyler-Smith.

“Historical records are often very fragmentary and potentially very biased,” Tyler-Smith says. “But genetics gives us a complementary approach that can confirm some of the things that we read about in history and tell us about things that are not recorded in the historical records that we have. And as this approach is adopted by historians and archaeologists as a part of their field, I think it will only become more and more enriching.”

A history of the Crusades, as told by crusaders’ DNA (Science Daily)

Crusader armies were remarkably genetically diverse, study finds (Guardian)

Why didn’t a civilization emerge along the Mississippi river valley? (Ask Reddit) Of course, it did: the Mississippian culture. The real question is, why was it not the most advanced New World civilization, rather than the cultures which appeared farther south in the Yucatan and the Andes mountains? One answer might be the frequent and terrible flooding of the river valley:

The Mississippi changes course quite often, which would break down any long term settlements. Wunderground, when it was good, had a wonderful article on this, and how it’s desperate to change course now, but that would destroy the trade routes etc., along the river, and make New Orleans a ghost town, which shows that what happened in the past, can happen again to our civilization.

That’s true of the Yellow River in China (“China’s Sorrow”), and in fact, containing the Yellow River may have been a spur for civilizational development in China. But its situation was different:

That is actually true of the Yellow River too. There were over 1,000 recorded floods, 18 documented major course changes, and the river made giant swamps/lakes that came and went, and just a lot of shenanigan with that river. I think what helped was that it is long enough and the civilization originated around its upper reaches rather than its incredibly problematic lower reaches, and by the time stable populations started living in the the worst flood plains, the population and technology were enough to prevent/mitigate most floods and handle a few really bad ones (i.e. losing a million or two is bad, but won’t be civilization ending)…[a] combination of where civilization developed, the technology to modify the river’s flow, and population is what saved China.

How differently did Eastern and Western Roman Empires cope and deal with the Barbarians? (Reddit history) Top comment:

The Eastern and Western Roman Empires weren’t separate entities as such at this point: Theodosius later ruled over both. I’m not sure there’s an issue of ‘learning from’ the experience differently but rather different underlying conditions. A huge amount of ink has been spilt on why the West fell (and the East didn’t), but I think some likely elements are:

– The Western Empire had the less wealthy provinces. Money was vital both for paying armies and for paying off barbarians: later on, the East paid barbarians to go away who went to the West instead…

– The Western provinces simply had more of a vulnerable, extended border with barbarian tribes than the East. The East had to deal with the Sassanians, but they were a single enemy who could be negotiated with, and there was relative peace in the 5th century. Until the Arab conquests the richest provinces were harder for enemies to reach while being well connected for friends by the Mediterranean. The Hellespont was a natural barrier against easy passage from Europe into Asia.

– The West had more usurpers and less stable continuity of power. As Emperors tended (probably rightly) to see usurpers as more of a threat than barbarians, civil wars tended to sap the ability to stop barbarians.

– I’m less sure of this one as a cause of the problems, but some attribute the West’s problems to the fact its emperors were more often dominated by military strongmen (weak emperors in the East being usually dominated by civilians). However, you can equally argue those strongmen helped stave off the fall!
In terms of surviving a thousand years, the Eastern empire was reduced to something of a rump state by the Arab conquests (Peter Heather says it became a ‘satellite state’ of the Caliphate, with its ability to act dependent on the rise and fall of their strength rather than vice versa). While the East saw times of regaining strength, by 1453 it was more a city-state than an Empire, and successor states in the West had been stronger for some time, albeit without the same institutional continuity.

The Feasting Theory rides again! The secret to a stable society? A steady supply of beer doesn’t hurt (Science Daily)

A thousand years ago, the Wari empire stretched across Peru. At its height, it covered an area the size of the Eastern seaboard of the US from New York City to Jacksonville. It lasted for 500 years, from 600 to 1100 AD, before eventually giving rise to the Inca. That’s a long time for an empire to remain intact, and archaeologists are studying remnants of the Wari culture to see what kept it ticking. A new study found an important factor that might have helped: a steady supply of beer…

Nearly twenty years ago, Williams, Nash, and their team discovered an ancient Wari brewery in Cerro Baúl in the mountains of southern Peru. “It was like a microbrewery in some respects. It was a production house, but the brewhouses and taverns would have been right next door,” explains Williams. And since the beer they brewed, a light, sour beverage called chicha, was only good for about a week after being made, it wasn’t shipped offsite — people had to come to festivals at Cerro Baúl to drink it. These festivals were important to Wari society — between one and two hundred local political elites would attend, and they would drink chicha from three-foot-tall ceramic vessels decorated to look like Wari gods and leaders. “People would have come into this site, in these festive moments, in order to recreate and reaffirm their affiliation with these Wari lords and maybe bring tribute and pledge loyalty to the Wari state,” says Williams. In short, beer helped keep the empire together…

By looking at the chemical makeup of traces of beer left in the vessels and at the chemical makeup of the clay vessels themselves, the team found two important things. One, the vessels were made of clay that came from nearby, and two, the beer was made of pepper berries, an ingredient that can grow even during a drought. Both these things would help make for a steady beer supply — even if a drought made it hard to grow other chicha ingredients like corn, or if changes in trade made it hard to get clay from far away, vessels of pepper berry chicha would still be readily available.

The authors of the study argue that this steady supply of beer could have helped keep Wari society stable. The Wari empire was huge and made up of different groups of people from all over Peru. “We think these institutions of brewing and then serving the beer really formed a unity among these populations, it kept people together,” says Williams.

The study’s implications about how shared identity and cultural practices help to stabilize societies are increasingly relevant today. “This research is important because it helps us understand how institutions create the binds that tie together people from very diverse constituencies and very different backgrounds,” says Williams. “Without them, large political entities begin to fragment and break up into much smaller things. Brexit is an example of this fragmentation in the European Union today. We need to understand the social constructs that underpin these unifying features if we want to be able to maintain political unity in society.”

I don’t know about that last part; we have more microbreweries than ever in the United States, yet we haven’t been this divided since the Civil War :(.

The microbrewery came to a dramatic end, however:

Then, after the brewery had run for hundreds of years, it hosted one final blowout bash. The artifacts at the site, the researchers believe, are a snapshot of what it looked like during its final hours.

Based on the positions of the artifacts, as well as ash and other sediments, it appears that this last festival ended with the Wari intentionally burning down the brewery. Festival-goers smashed their cups onto the smoldering ashes; the fermentation jars were toppled, the pieces strewn about the brewery; and seven necklaces of shell and semi-precious stones were ceremoniously laid on the ruins. The whole thing was covered with sediment.

Why they did this is unclear. But one thing’s for sure: It was a clear signal that the brewery was now closed.

If you’re curious to recreate the bonding experience of the ancient Wari, don’t despair! You may get your opportunity:

The Field Museum had already partnered with Off Color Brewing in Chicago to make a dino-themed brew called Tooth and Claw. And when the Field Museum’s marketing team got wind of the Wari research, they wondered if Off Color might be interested in making a chicha, too.

There were a few roadblocks. The Wari recipe makes a brew that only keeps for five days. That’s slightly problematic in the modern beverage industry. Plus, to legally call a drink a “beer,” certain recipe standards have to be met — fermented corn and pepper berries don’t quite cut it.

But the group persevered. Instead of replicating the exact recipe, they decided, they’d just work to replicate the flavor in the form of a modern ale. Off Color obtained flavor-packing ingredients from the source: purple maize and pink peppercorns from Peru.
The brewers went through multiple iterations with the archaeologists until they got the taste just right. The beer first came out in 2016 and was so popular Off Color is re-releasing it this June.

The Field Museum helped supply Discover with a six-pack of Off Color’s Wari Ale. And, for my personal tastes, the pepper-berry-chicha mimic is delicious. It’s a little sour but doesn’t make you pucker; a little fruity but not too sweet. It’s light and refreshing, and a fabulous shade of purple.

A Brewery in Peru Ran For Centuries, Then Burned After One Epic Ancient Party. (Discover Magazine). I may have to take a field trip to Chicago to try this out.

Was There a Civilization On Earth Before Humans? (The Atlantic)

We’re used to imagining extinct civilizations in terms of sunken statues and subterranean ruins. These kinds of artifacts of previous societies are fine if you’re only interested in timescales of a few thousand years. But once you roll the clock back to tens of millions or hundreds of millions of years, things get more complicated.

When it comes to direct evidence of an industrial civilization—things like cities, factories, and roads—the geologic record doesn’t go back past what’s called the Quaternary period 2.6 million years ago. For example, the oldest large-scale stretch of ancient surface lies in the Negev Desert. It’s “just” 1.8 million years old—older surfaces are mostly visible in cross section via something like a cliff face or rock cuts. Go back much farther than the Quaternary and everything has been turned over and crushed to dust.

The authors don’t believe that there was, but they use the question as a thought experiment to document the long-term effects of human industrial civilization. Perhaps this hypothetical ancient civilization was George R.R. Martin’s Westeros? They do seem to live alongside extinct megafauna.

Imagine if geography were arranged to maximize the spread of cultural developments around the world. That’s the idea this guy had: Welcome to Jaredia (World Dream Bank). Diamond has a brand new book out: Upheaval by Jared Diamond review – how nations cope with crisis (Guardian)

The Ancient Earth globe.

Did a cardiac arrhythmia influence Beethoven’s music? (Science Daily)

The story of handwriting in 12 objects (BBC)

Why India (and only India) is called a “subcontinent” (TYWKIWDBI)

Associating colors with sounds is not just for synesthetes, and the pairings appear to be universal. (Science Daily)

‘Spectacular’ ancient public library discovered in Germany (Guardian)

Are dingoes just feral dogs? (Anthropocene)

Evidence for a giant flood in the central Mediterranean Sea (Phys.org)

A lot of people have speculated as to when capitalism first got started. Now, I think we may finally have an answer. The way I see it, capitalism truly began when we started to rip each other off: Prehistoric Traders Cheated Rich People With Fake Amber Jewelry (Discover Magazine):

The researchers know the beads are fakes, but are still working out why they exist. Odriozola and colleagues propose three possible explanations. The first plays on the fact that amber is rare. It’s possible a shortage of real amber inspired the creation of imitations. Alternatively, the production of a low-cost product that serves the same social function as amber for members of society who could not afford the real gems is plausible. But the third possibility appears the most likely: Traders who could not acquire the valuable and rare items developed counterfeits to sell as the real thing and cheat their clients.

The researchers say this last option might have been the case in Cova del Gegant, where the four resin-covered beads were found alongside two genuine amber beads that are nearly identical in size and shape. Visually, the authentic beads and the counterfeits look exactly the same.

“The quest for power and wealth are conspicuous behaviors in humankind that would fit perfectly on middlemen cheating wealthy people to acquire wealth and power,” Odriozola said. Plus, Odriozola says, if the tradesmen fooled him and his colleagues, who are well-trained archaeologists, with the fake amber, then it’s almost certain they also pulled one over on the wealthy community members.

And thus, capitalism was born, ha!

The Capitalism vs. Socialism Debate

I’ve got just a few more thoughts about the whole Capitalism versus Socialism question – not about the Peterson/Zizek debate specifically, but about the debate more generally.

The first is that it’s a lot easier to criticize capitalism than it is to defend socialism.

This is obvious. We’ve seen that even the most ardent defenders of capitalism fully acknowledge many of its shortcomings, as Peterson does in the debate. They are fully aware that there are many problems with capitalism, and sometimes even serious problems with it. Anyone who denies this would look like an idiot.

So that’s a good place to start.

Rather than immediately set up a false dichotomy, why not point out the shortcomings of the current system and go from there?

So many debates assume, implicitly or not, that there is some sort of “-ism” that we can simply plug in to replace the existing system root-and-branch.

I think it’s plain to see that there is no such “-ism.” And so therefore many people just give up and accept the status quo, or make sweeping, facile statements about “overthrowing capitalism” or some such, without any real idea of what they’re talking about. Such people are easily dismissed.

And yet, the current problems with capitalism are not so easily dismissed.

The problem I see is that many of the specific, targeted solutions to specific problems under our current form of capitalism are dismissed and waved away as socialism, as if that was somehow an argument.

That’s the problem.

Stick to the issue, not the -ism.

Rather than dwelling on the failings, I think we should focus on how much socialist ideas are responsible for the prosperity we enjoy under our brand of so-called capitalism.

Whether that’s worker protection laws (currently being gutted), advanced technological infrastructure, or government subsidies keeping “free” market prices reasonably stable, many of the concepts and practices that make modern industrial societies as wealthy and prosperous as they are lie about as far from doctrinaire “Classical British Liberalism” as can be.

A lot of what Marx was writing about doesn’t even exist today. And besides, it’s not like Classical British liberalism didn’t kill anybody. The people of Ireland in the 1840’s might have something to say about that. As a percentage of the population, we’re talking about deaths that are on par with Stalin’s crimes. Yet it somehow doesn’t count because it happened earlier?

From my understanding, what Marx was saying was that the inherent contradictions of capitalism would eventually cause the system to undermine itself, making it less and less viable over time. And from where I sit, this prophecy seems to be coming true.

He wasn’t saying that capitalism was worse than feudalism, or that mass-production of commodities didn’t confer benefits to a lot of people. Rather, he argued that capitalism wasn’t an end-stage of human social organization, but a necessary transitory one that we needed to pass through. It *had* to be transitory, for several reasons.

The simplest and most basic one is that nothing grows forever. Capitalism, as currently constituted, requires ever-increasing production, ever-growing surplus, and ever-higher profits. Like an airplane or bicycle, its forward momentum is the only thing that keeps it stable and upright over time. But the idea that you can constantly produce more and more every single year implies that needs and wants—and more alarmingly the biosphere itself—are infinite.

So that’s one contradiction.

Related to this is the fact that capitalism requires scarcity even while producing abundance. The commodities that capitalists sell need to be reasonably scarce, or they will not command a sufficiently high price (i.e. exchange value) in markets to justify their production, and that is what capitalists care about rather than actual use value. And so, you need to keep even abundant goods artificially scarce. You also need to keep people persistently dissatisfied with what they already own so that they will keep purchasing “new and improved” items—hence the “organized creation of dissatisfaction” that the early advertisers (honestly) claimed was their reason for existing.

Simply put, too much prosperity is bad for business.

And one that has come into very sharp relief today is the fact that capitalism relentlessly drives towards more efficiency, but such efficiency necessarily reduces the amount of total labor that needs doing. Yet everyone is required to sell their labor as a basic condition of survival!

Recent developments in automation and AI have made this especially visible, but it’s been a serious problem for a long time. It’s a problem all over the world today, wherever capitalism has displaced more traditional arrangements not predicated on wage-earning and constant, never-ending growth.

Clearly there is, in fact, a “lump of labor,” at least at any single point in time – otherwise unemployment would never have existed throughout history! How else do we explain things like the Luddite Revolt and the Captain Swing riots (just to mention two)? And, even if the “jobs we can’t even imagine” do eventually manifest themselves, what are displaced workers supposed to do in the meantime under a “pure” capitalist (i.e. non-socialist) system?

It was basic contradictions like these that Marx could see by taking an unflinching look at the system that had developed out of earlier forms of economic organization. He felt that the capitalism of his time could not continue. And, to some extent, these developments have already undermined the kind of imaginary libertarian capitalism taught in economics textbooks, but that exists nowhere in the real world outside of them.

I mean, big business and corporate bosses are the frequent recipients of all sorts of largesse from governments that we might consider to be socialism (or “corporate welfare”). Are they really going to then turn around and argue that socialism has been a total failure? Has it failed for them?

Why is it only a “failure” when it improves the lives of the average citizen?

Instead of arguing for Marxism (whatever that means), why not persistently argue for all the ways that socialism has worked all around the world, and continues to work? Rather than constantly running from the Black Book of Communism, how about talking about how many lives socialism has actually saved through initiatives like universal health care (where it exists), public assistance, worker housing, and the like? I mean, a hell of a lot of the higher living standards we enjoy under capitalism are not, strictly speaking, due to doctrinaire capitalism.

As an aside, the very first thing I ever heard Jordan Peterson say (it was on the Rogan podcast—I had no idea who he was at the time) was to belittle college students for daring to criticize capitalism on their iPhones. So you might say I was predisposed not to think of him as any sort of “deep thinker” from the very start.

This argument is so tired and cliched that it has its very own cartoon:

Not to mention that almost everything in the iPhone was created through publicly-funded government investment and research. And yet, the public now has to buy back their own investment from the richest company on earth, one that sits on piles of cash it doesn’t even know what to do with (while simultaneously being told by politicians in both parties that our government is “broke”). The public can’t even afford to go to sports games in the countless sports arenas and stadiums that they (we) pay for!


Why don’t we talk about that? I wonder, is socialism really such a failure after all?

Finally, criticizing social scientists for being secret “Marxists” not only smacks heavily of McCarthyism, but is also like criticizing biologists for being Darwinians. See the bonus below for why. What is that trope even supposed to mean, anyway?

Anyway, those are just some random thoughts…

BONUS: I thought this video by someone calling himself the “Finnish Bolshevik” (!) made some good points regarding the whole “Capitalism is the only system aligned with basic human nature” argument:

[Peterson] repeated over and over again that, ‘yeah, there [are] problems, but there’s nothing that can be better.’ He said that climate change is not as big a problem as many people think it is. There’s always going to be inequality but we’re trying to deal with it to the best of our ability…that’s the typical moderate position that, ‘Yeah, it’s not good, but this is the best we can do. Stop talking about any real change…’

Basically, Peterson makes these tired old anti-Communist arguments, extremely cliched; what we’ve heard a million times before: human nature, Communism has killed millions, Communism has never worked, calculation problem, it’s all there…

Then, Peterson basically does the whole ‘human nature’ argument. He says that humans are naturally different, hence you have this natural hierarchy, therefore Communism supposedly goes against human nature and is impossible, and Capitalism supposedly corresponds to human nature. But once again, he doesn’t know about Marx. He doesn’t know that Marx talks about natural differences in people, and that it’s not a problem for Communism. And it’s just a foolish idea to think that capitalism corresponds to human nature, as if you let a feral child loose into the wild for a year, and then he’s going to build Wal-Mart.

No, Capitalism does not exist in what Rousseau would call a “State of Nature.” A hunter-gatherer society exists in nature. Capitalism exists in civilization; in society. In a state of nature, we had a hunter-gatherer society—what Marx called “primitive communism,” because that was a society where everybody worked, where there was no exploiter class, where there were no different classes, there were no means of production, there was no property—that is how humans lived for 250,000 years! That is how humans lived in a state of nature.

Then technology developed. We built civilization; we built society. And then, instead of just having biological evolution, we started to have cultural evolution, societal evolution, technological evolution, what have you. And then we got these different forms of societies—we got slave society, feudal society, capitalist society, socialist society. Capitalism doesn’t naturally grow out of a human’s biology. It emerged as a result of previous civilizations—previous social, cultural and economic development.

And that’s why, if you put a human in a hunter-gatherer society, he’s going to act a whole lot different than in a slave society. A hunter-gatherer is going to think that it’s perfectly natural that land is not owned by anybody. He’s going to have a ‘naturally’ communist concept when it comes to land ownership. Somebody in a slave society is going to think that it’s perfectly natural that we have slaves. Every society has always said that ‘this is the way it’s supposed to be—this is what human nature is.’ But human nature is malleable, and it greatly varies based on what kind of economic system you’re in.

And that’s the basic Marxist argument. That it really is the class struggle that determines how we act. It defines how our entire society is structured. Different classes = entirely different society. Slaves: slave society. Workers: Capitalist society. Hunter-gatherers: primitive communist society…

Thoughts on the Peterson/Zizek Debate

Because I’m a glutton for punishment, I thought I’d proffer a few observations on the Zizek/Peterson debate, which I’ve finally had an opportunity to watch.

My first thought was the sheer bizarreness of the basic premise for the debate: Capitalism versus Marxism, which is better for happiness? Er, how about none of the above? As Peter Joseph opined:

“The title of this debate is ‘Happiness: Capitalism versus Marxism.’ An unfortunate decision because it sets up a binary position between assumed ideologies, while throwing in the word “happiness,” which muddies the issue even more, since what defines happiness is sociologically vague when it comes to causality…and yet, people are going to watch this, especially young people, and this is going to be their limit of debate. This is going to be how they’re going to frame their sense of possibility in terms of future social organization…”

I think a better topic might be, rather: what kind of capitalism do we want to have? Indeed, I think that’s the conversation we should be having. After all, most everywhere is basically capitalist now to one degree or another (with the possible exception of oddballs like North Korea).

And that would be a far easier question to answer. If I were a participant in that debate, I would make the following points:

The first is that every indicator of overall happiness, well-being and life satisfaction that we know of has, for as long as I can remember, put the Nordic countries—especially Scandinavia—on top. Finland, Norway and Denmark are the perennial leaders in happiness rankings every single time, closely followed by other Nordic countries such as Sweden and Iceland, and then by other Western European countries like Switzerland, Austria, Germany, France and the Netherlands. Occasionally a few Asian and Anglo-Saxon countries make good showings in the rankings, countries like Japan, Australia, and Peterson’s own native Canada. The United States almost always ranks lower than these democratic socialist countries, but higher than the desperately poor failed states of Africa, Asia, and the Middle East, which is a rather sad point of comparison. For example:

Happiness report: Finland is world’s ‘happiest country’ – UN (BBC)

Finland is offering free trips to people in need of happiness lessons (Treehugger)

So the empirical evidence here is pretty overwhelming and unequivocal. If happiness is your goal—which was one of the core premises of the debate—then the Nordic countries can’t be beat. And in my view, much of what constitutes the so-called “radical Left” in America today is merely advocating a move in that direction, and not for any kind of revolutionary socialism or Communism. This is evident from the policy proposals of the most notable Left-wing politicians in America today: Bernie Sanders, Elizabeth Warren, AOC, and their ideological allies.

The second point is a quibble with Peterson’s assertion (paraphrasing Churchill) that capitalism is the worst economic system except for all the others that have ever been tried. Here’s Peterson:

“I heard a criticism of capitalism, but no real support of Marxism, and that’s an interesting thing…Zizek points out that there are problems with capitalism. I would like to say that I am perfectly aware that there are problems with capitalism. I wasn’t defending capitalism actually…I was defending it in comparison to Communism, which is not the same thing. Because, as Winston Churchill said about democracy, it’s the worst form of government there is, except for all the other forms.”

“And so you might say the same thing about capitalism–that it’s the worst form of economic arrangement you could possibly manage except for every other one that we’ve ever tried. And I’m dead serious about that; I’m not trying to be flippant…”

Except he’s wrong about that. We *have* tried another form of capitalism. It goes under a variety of names, but it was, in essence, the type of managed capitalism we had from the end of the Second World War up until the mid-1970s or so.

After 1980, we embraced a new “software” running on the “hardware” of Capitalism—Neoliberalism, initially put in place by Thatcher and Reagan in the Anglo-Saxon countries, and then spreading to the rest of the industrialized world to one degree or another. Under the “Washington Consensus,” it became the dominant model for the developing world as well after 1980. It continues to try and usurp all other competing forms of capitalism, including that of Northern Europe, often under the guise of “disaster capitalism” caused by fiscal crisis, political crisis, or deliberately imposed austerity policies (c.f. Naomi Klein’s “Shock Doctrine”).

The tenets of this new philosophy of capitalism can be summarized as follows:

  • Globalization. That is, unrestricted capital flows between nations. Western workers would now be in direct head-to-head competition with the billions of increasingly-educated workers all over the globe. Often referred to as a “race to the bottom.” Corporations and businesses are unmoored from any particular state, as if floating on a barge, free to go anywhere on earth to seek the highest returns.
  • Deregulation (really re-regulation) of corporations and financial institutions, trusting that markets will be “self-regulating,” and that somehow self-interested parties left to their own devices will keep the system stable.
  • Financialization, or the leveraging of money and debt, along with gambling, to increase wealth as opposed to investing in productive enterprise. This has led to things like leveraged buyouts and asset-stripping on a massive scale. Sometimes referred to as “casino capitalism.” Much wealth has been reallocated rather than created due to this process. It’s also led to repeated bailouts of the financial system, often referred to as “privatized profits and socialized losses.”
  • Putting the notion of “shareholder value” front-and-center, ahead of all other business concerns—such as those toward employees, contractors, suppliers, the environment, and the wider society in general—per Milton Friedman’s doctrine. Focusing on immediate, short-term returns instead of long-term stability, sometimes called “quarterly capitalism.” This is exacerbated by compensating CEOs with stock options instead of salaries, which has also dramatically increased inequality.
  • Austerity—the idea that the government must always run balanced budgets and pay down excess “debt” even in times of financial crisis and instability. This has led to deep, recurring cutbacks in government-provided social services all across the developed world, especially in the Eurozone with its common currency system. More generally, it promotes the idea that governments are perennially “broke” and cannot–and should not–provide for basic social welfare provisioning.
  • Drastically reduced taxes on wealth, with correspondingly high government budget deficits. Passive investment income like capital gains and dividends are taxed at a much lower rate than salary and wage income. Government relies more and more on regressive taxation like FICA and VAT rather than steeply progressive taxes, as it once did.
  • Privatization of government services and a drastically reduced role for government as opposed to the private sector. The idea is that citizens are best served by becoming “consumers” shopping around in “free” markets using their own resources, rather than relying on collectively-provisioned government services, which are derided as “inefficient” and “wasteful” despite all the evidence to the contrary.
  • A much more “flexible” labor market, and a concomitant hostility to labor unions. A general reduction in employer obligations to workers (but not the reverse), and the rise of abusive labor arrangements such as the “gig economy” and “zero-hours contracts.”
  • A commodification of all aspects of life. Things that used to be provided for free are turned into commodities to be bought and sold (“Markets in Everything”)–even things like childcare, food preparation, advice, and companionship.
  • Massive consolidation of entire sectors of the economy in the hands of just a few participants, often referred to as “monopoly capitalism.” More precisely, many market sectors have become oligopolies, with only a handful of big companies exerting titanic influence over entire sectors, such as wholesale agriculture and Internet provision (Comcast, Time-Warner, etc.). In the online world, companies like Google, Facebook, eBay, Amazon and Instagram dominate with no real competition. Mergers also serve to concentrate wealth and eliminate competition, and they are completely legal. While not a part of Neoliberal economic doctrine per se, it has been the effective end result, especially in the United States where monopoly regulations are no longer enforced, even those already on the books. Much competition in the business world is more imagined than real.
  • Mass surveillance of the citizenry. While also not a doctrinaire aspect of Neoliberalism, it has been the result all over the world, most disturbingly in the world’s largest and fastest-growing economy, China. I would argue that the need to aggressively police markets (such as enforcing IP protection), and to deal with (i.e. jail) the inevitable portion of the populace that cannot meet their needs through market forces alone, requires such authoritarian surveillance and draconian legal structures.

The list goes on and on, but I think you get the idea.

In the late 1930s a group of intellectuals, including Hayek, Ludwig von Mises, and others adopted the term “neoliberalism” to describe their agenda based on the conviction that laissez faire was not enough. The Great Depression paired with the rise of mass democracy meant that the market would not take care of itself. Wielding their ballots, electorates would always vote for more favors for themselves — and, thus, more state intervention into the economy — crippling the combination of market prices and private property upon which capitalism depended. From this time onward, as I describe in my recent book, one of the primary dreams of neoliberals was for institutions that would constrain democratic demands and protect the free movement of capital, goods, and (sometimes, but not always) people across borders.

Neoliberalism’s Populist Bastards (Public Seminar)

Even economic growth and GDP—supposedly the end-all and be-all of capitalism—was actually higher for much of that earlier period compared to the subsequent Neoliberal period, especially after the year 2000. Of course, much of that growth came from rebuilding after the Second World War, but that just shows that rapid economic growth is often more dependent on external forces than the vicissitudes of economic regulation (such as tax rates or trade). After the global financial crisis in 2008, growth outside of developing countries has almost completely stalled, causing problems all over the world, including the rise of reactionary populism.

The neoliberal assumption is that no one deserves anything, and everyone should have to do mortal combat for everything. That is, no one deserves healthcare, an education, an income, retirement—these things only belong to the “winners” of a never-ending social contest, in which the stakes are life or death. So it’s not exactly a surprise that neoliberalism set fire to the world. That the Champs Elysees is in flames, that Britain melted down, that American life simply fell apart. The fundamental idea was always going to fail: to make everyone fight everyone else for everything all the time?

What a 21st Century Politics Looks Like (Medium)

So, once again getting back to the original premise of the debate, which type of capitalism leads to greater happiness? If I were setting the terms of the debate, I would personally want to argue not so much for Marxism (whatever that means) as against Neoliberalism. After all, let’s face it, Marxism is still underground and marginalized, while Neoliberalism is, by far, the most forceful and dominant economic orthodoxy in the world today. There’s just no comparison.

Had I been a participant in the debate, I would have waved around copies of the Case/Deaton report, which extensively documented a dramatic increase in “deaths of despair” after the 2008 financial meltdown. Can there be any more damning an indictment of Neoliberalism’s effect on happiness, especially in the contemporary United States? Here’s an interview with Deaton from 2017:

“[I]f you look at white, non-Hispanics in midlife, in their early 50s for example, their mortality rate after 100 years of declining had turned the wrong way or at least flattened out. This is not happening to other groups in the U.S. It’s not happening to Hispanics. It’s not happening to African-Americans. And it’s not happening in any other rich country in the world. This is happening to both men and women. Perhaps the most shocking thing is that a lot of the deaths come from what you might think of as behavioral factors, which are alcohol – alcoholic beverages – from suicides and from drug overdoses. Many of those drug overdoses are accidental overdoses from prescription drugs. People often think the health system is responsible for our health. In this case, the health system is responsible for killing people, not actually helping them. … It’s like there are two Americas out there: the people with a B.A., and people without a B.A. The mortality rates of white non-Hispanics without a B.A. are going up faster than the average. They’re much more subject to opioid abuse, suicides, alcohol-related liver disease and heart disease, which has been a major cause in mortality decline. Mortality from heart diseases stopped declining and started rising. There’s a lot of really bad stuff going on, especially for this group without a B.A.”

Interview with Angus Deaton on Death Rates, Inequality, and More (The Conversable Economist)

Or, take the report by Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, who visited the United States a couple of years back:

My visit coincides with a dramatic change of direction in US policies relating to inequality and extreme poverty. The proposed tax reform package stakes out America’s bid to become the most unequal society in the world, and will greatly increase the already high levels of wealth and income inequality between the richest 1% and the poorest 50% of Americans. The dramatic cuts in welfare, foreshadowed by the President and Speaker Ryan, and already beginning to be implemented by the administration, will essentially shred crucial dimensions of a safety net that is already full of holes. It is against this background that my report is presented.

The United States is one of the world’s richest, most powerful and technologically innovative countries; but neither its wealth nor its power nor its technology is being harnessed to address the situation in which 40 million people continue to live in poverty…

American exceptionalism was a constant theme in my conversations. But instead of realizing its founders’ admirable commitments, today’s United States has proved itself to be exceptional in far more problematic ways that are shockingly at odds with its immense wealth and its founding commitment to human rights. As a result, contrasts between private wealth and public squalor abound.

Statement on Visit to the USA, by Professor Philip Alston, United Nations Special Rapporteur on extreme poverty and human rights (UN)

That statement is immediately followed by a long list of damning statistics indicating a dramatic social breakdown inside the United States over the past few decades – a breakdown unique among the world’s rich, industrialized nations, and especially marked after 2008 (over ten years ago now). I would have waved those statistics all over the place as well.

In 1918, a pandemic of Spanish flu infected approximately one third of the global population, killing between 20 and 50 million people. In the United States alone, more than 650,000 people died, enough to contribute to a decline in the country’s life expectancy. For a century, this was the worst decline in American health. Until this year. The National Center for Health Statistics reported that, between 2016 and 2017, US life expectancy dropped from 78.7 to 78.6 years. This marks the third consecutive year that life expectancy in the US has decreased.

We have not had a drop like this since the 1918 flu pandemic. What does our lack of attention tell us about how we think about health in this country?

…perhaps we have come to accept the longer-term trend in which US life expectancy has lagged relative to other economically comparable countries. Perhaps knowing that our health is not terrific is simply the American condition. But, of course, it is not and our health was not always worse than our peer countries. As recently as thirty years ago we were in the top half of the pack…Shouldn’t we then start paying attention to the worst American health deterioration in a 100 years?

The Story We Are Not Talking About Enough (Public Health Post)

What makes people unhappy? Stress about money. Precariousness. Poor physical health. A bad diet. Homelessness. Unemployment. “Social distance.” Random violence. Abuse. Johann Hari recently wrote about these factors in his book about the causes of depression, and depression is the biggest drain on happiness that there is. These are the inevitable fruits of Neoliberalism. Yet there are alternatives – alternatives that have been tried, that exist right now, that do not require us to abandon markets or capitalism, and that certainly do not require us all to become politically-correct Marxists. So I wish we had less of a binary debate about “Capitalism versus Marxism” and more of a debate about the type of capitalism we want to have, because there are versions that have been proven to work well and versions that don’t.

I think Peterson’s best comments came in his first 10-minute response, which is also a good summary of his philosophy. He made some good points here, but there are also a few things to quibble with:

“He [Zizek] said, well, what [are] the problems with capitalism? Well, the commodification of cultural life–all life–fair enough. There’s something that isn’t exactly right about reducing everything to economic competition. Capitalism certainly pushes in that direction; advertising culture pushes in that direction; sales and marketing culture pushes in that direction. And there’s reasons for that. I have a certain amount of admiration for the necessity of advertisers and salesmen and marketers. But that doesn’t mean that the transformation of all elements of life into commodities in a capitalist sense is the best way forward. I don’t think it is the best way forward…”

Good! It seems Peterson might be receptive to the ideas of Karl Polanyi. In fact, Polanyi, too, rejected Marx’s ideas of class conflict and the arbitrary division into capitalists and proletariat as needlessly simplistic. So there’s some agreement there. That brings to mind another very important point that Peterson himself brought up, which I think is one of the most important things he said all evening:

“There is, by the way, a relationship between wealth and happiness. It’s quite well defined in the psychological literature. Now it’s not exactly obvious whether the happiness measures are measures of happiness, or whether they’re measures of the absence of misery. And my sense is, as a psychometrician who’s looked at these scales, that people are more concerned with not being miserable than they are with being happy. Those are actually separate emotional states mediated by different psychobiological systems. It’s a technical point, but it’s an important one.”

Yep, I agree with that, and I think it’s important.

“There is a relation between absolute level of income and self-reported lack of misery or happiness. And it’s pretty linear until you hit, I would say, something approximating decent working class income. So what seems to happen is that wealth makes you happy as long as it keeps the bill collectors at bay. Once you’ve got to the point where the misery is staved off as much as it can be by the fact that you’re not in absolutely dire economic straits, then adding more money to your life has no relationship whatsoever to your well-being.

“And so, it’s clear that past a certain minimal point, additional material provision is not sufficient to, let’s say, redeem us individually or socially. And it’s certainly the case that the radical wealth production that characterizes capitalism might produce a fatal threat to the structure of our social systems and our broader ecosystems. Who knows?…”

What makes these comments so extraordinary is that, to me, they are excellent arguments not for libertarian winner-take-all capitalism but rather for democratic socialism! Let the capitalists make their fortunes, sure. But tax away the highest fortunes, and the happiness of the rich will not be negatively affected at all, since what we really want is not wealth but status, and status is inherently relative. Then use the wealth created by the capitalists to provide basic social provisioning to all of your citizens, including those excluded from market participation for one reason or another. Provide the basics of survival (food, housing, education) so that people have a basic sense of security. After that, it’s up to them. That’s a recipe for societal happiness.

Is that Marxism? I don’t know, but it seems like a fair compromise to me. Good on Peterson for making these points. His next points are a bit more problematic, however:

“I didn’t hear an alternative, really, from Dr. Zizek. Now, he admitted that the rise to success of the Chinese was in part a consequence of the allowance of market forces and decried the authoritarian tendencies. And fair enough, that’s exactly it. And it also seemed to me that the social justice group identity processes that Dr. Zizek was decrying are to me a logical derivation from the oppression narrative that’s a fundamental presupposition of Marxism. I never heard a defense of Marxism in that part of his argument as well. And so, for me again, it’s to ask what is the alternative?”

“I also heard an argument for egalitarianism, but I heard it defined as equality of opportunity, not as equality of outcome, which I see as a clearly defined Marxist aim. I heard an argument for a modified social distribution of wealth, but that’s already part and parcel of most free market economic states with a wide variation, and an appropriate variation of government intervention, all of which constitute their own experiment. We don’t know how much social intervention is necessary to flatten the tendency of hierarchies to tilt it so terribly that all of the people at the top have everything and all the people at the bottom have nothing. It’s a very difficult battle to fight against that profound tendency. It’s much deeper than capitalism itself, and we don’t know what to do about it, so we run experiments. And that seems to be working reasonably as far as I can tell…”

Here, the thing is, Peterson is debating his own imaginary form of Marxism which has no relation to the real thing! In fact, at the very start of the debate, he admits to reading the Communist Manifesto (and only the Communist Manifesto) for the first time in something like forty years. And yet, all this time he has been going all around the world denouncing Marxism!

I don’t think Marx and Engels ever advocated for absolute equality of outcome or total leveling. So why, then, does Peterson constantly—and I mean constantly—argue that it does? If you claim that Marxists must believe X, and then you have a dialogue with a Marxist who doesn’t advocate for X, and doesn’t know anyone else similar to him who believes X, then there are only two options. One is to argue that your opponent is, in fact, not a Marxist, which is what Peterson does (the “No True Scotsman” fallacy). Or, you could admit that you were wrong the whole time about what you assumed your opponents believed, which I think is the right response.

Zizek himself points this out at one point:

“Where did you find this [idea of] egalitarianism? There is one passage in his late ‘Critique of the Gotha Programme‘ where Marx assesses the problem of equality. And he dismisses it as a strict bourgeois category–explicitly, explicitly. For him, Communism is not egalitarianism…”

So the idea that Marxism calls for an absolute equality of outcome is just a figment of Peterson’s imagination (i.e. a Straw Man fallacy). I was curious to learn more about this, so I did a quick Google search, and here’s what I found:

…Marx makes two main points about equality in his 1875 ‘Critique of the Gotha Programme’. Firstly, Marx claims that it makes no sense to speak of equality in the abstract. This is because we can only understand what it means for x to be equal or unequal with y if we first specify the dimensions along which they are being compared. For x to be equal to y is for them to be equal in a particular concrete respect. For example, if x and y are people then they can only be judged equal relative to particular criteria such as their height, how many shoes they own, or how much cake they have eaten. Therefore, one can only be in favour of equality along specific dimensions, such as equality of cake consumption, and never equality as an abstract ideal.

Secondly, Marx claims that advocating equality along one dimension, such as everyone in a society earning the same amount of money per hour worked, will lead to inequality along other dimensions. Everyone earning an equal amount per hour of work would, for example, lead to those who work more having more money than those who work less. As a result, those unable to work a large amount (if at all) such as disabled people, old people, or women who are expected to do the majority of housework, will be unequal with those who can work more, such as the able-bodied, young people, or men. Or those doing manual labour, and so unable to work long hours due to fatigue, will be unequal to those who engage in non-manual labour and so can work more hours. If a society decides to instead ensure equality of income by paying all workers the same daily wage then there would still be inequality along other dimensions. For example, workers who don’t have to provide for a family with their wage will have more disposable income than workers with families. Therefore we can never reach full equality but merely move equality and inequality around along different dimensions.

If Marx was not an egalitarian in the strict sense of the term then what was he? The answer in short is a believer in human freedom and human development. For Marx, the “true realm of freedom” consists in the “development of human powers as an end in itself”. As a result, he conceives of a communist society as one in which “the full and free development of every individual forms the ruling principle”. In such a society there are “[u]niversally developed individuals, whose social relations, as their own communal . . . relations, are hence also subordinated to their own communal control”. This “communal control” includes “their subordination of their communal, social productivity as their social wealth”. Marx therefore justified the forms of equality he did advocate, such as the communal ownership and control of the economy, on the grounds that they led to human freedom and human development, rather than simply because they were egalitarian.

Marx and Engels Were Not Egalitarians (anarchopac). Seems like a good recipe for happiness to me. In fact, Marx’s theory of alienation appears to be the first serious attempt to actually think about the economy’s effects on life satisfaction by any economic thinker, which stands in stark contrast to many of the Classical Liberals, who seemed to think of people as simply work machines without their own goals or aspirations.

So, a simple Google search, not to mention a five-minute conversation with someone who, you know, has actually read Marx’s work beyond the Manifesto, would have invalidated this specious argument. The reason Dr. Zizek wasn’t advocating for equality of outcome is simple—it has nothing whatsoever to do with Marxism! Neither does political correctness or identity politics.

The second point I highlighted above is Peterson’s claim that we don’t know how to do anything to fight inequality. This is a major objection that I’ve had to his views for a long time. He’s repeated this statement quite often in his interviews and speeches, such as in his interview with Russell Brand.

Now, he is somewhat correct in the sense that we don’t know how to create a system that eliminates all inequality, or that can operate a complex, technological society without any sort of hierarchy. But as I said, many of the policies that have since been systematically dismantled by Neoliberalism did a reasonably good job of containing inequality, and helped ensure that the wealth generated by capitalism was more broadly shared. And I think the denial of this fact, intentional or not, is intellectually dishonest. Under the style of capitalism that existed before Neoliberalism, there was a period of much more equitable distribution of resources sometimes called “the Great Compression.” Wikipedia describes it, and gives some reasons why it happened:

Economist Paul Krugman gives credit for the compression not only to progressive income taxation but to other New Deal and World War II policies of President Franklin Roosevelt. From about 1937 to 1947 highly progressive taxation, the strengthening of unions under the New Deal, and the wage and price controls of the National War Labor Board during World War II, raised the income of the poor and working class and lowered that of top earners. Krugman argues these explanations are more convincing than the conventional Kuznets curve cycle of inequality driven by market forces because a natural change would have been gradual and not sudden, as the compression was.

Explanations for why the compression lasted as long as it did include the lack of immigrant labor in the US during that time (immigrants often not being able to vote and so support their political interests) and the strength of unions, exemplified by Reuther’s Treaty of Detroit—a landmark 1949 business-labor bargain struck between the United Auto Workers union and General Motors. Under that agreement, UAW members were guaranteed wages that rose with productivity, as well as health and retirement benefits. In return GM had relatively few strikes, slowdowns, etc. Unions helped limit increases in executive pay. Further, members of Congress in both political parties significantly overlapped in their voting records and relatively more politicians advocated centrist positions with a general acceptance of New Deal policies.

The end of income compression has been credited to “impersonal forces”, such as technological change and globalization, but also to political and policy changes that affected institutions (e.g., unions) and norms (e.g., acceptable executive pay). Krugman argues that the rise of “movement conservatism”—a “highly cohesive set of interlocking institutions that brought Ronald Reagan and Newt Gingrich to power”—beginning in the late 1970s and early 1980s brought lower taxes on the rich and significant holes in the social safety net. The relative power of unions declined significantly along with union membership, and executive pay rose considerably relative to average worker pay. The reversal of the great compression has been called “the Great Divergence” by Krugman and is the title of a Slate article and book by Timothy Noah. Krugman also notes that the era before the Great Divergence was one not only of relative equality but of economic growth far surpassing that of the “Great Divergence”.

Great Compression (Wikipedia)

The major obstacles to implementing ideas that would reduce inequality today are the Neoliberal ideas embraced to one extent or another by both major political parties in the U.S. The policies that would accomplish this compression—and, I would argue, allowed capitalism to continue to function at all—are now only advocated by people who are considered “far Left,” people who are depicted as Stalinists and Maoists and utterly reviled by the conservative media, including Fox News (where Peterson is a frequent guest).

Want to decrease suicide? Raise the minimum wage, researchers suggest (CBS News)

I purposely did not quote from Peterson’s opening 30-minute remarks, which I thought were not nearly as good, largely because they focused on a criticism of Marxism that was obviously not well-informed. Admittedly, Zizek did not seize on this, instead reading a rambling prepared statement which basically consisted of his personal intellectual hobby-horses and did not really address the topic at hand. Here, I will again quote Peter Joseph:

“My focus here will be Jordan Peterson’s [arguments] which are conservative and are on the side of capitalism, if you will…What he does is create a massive straw man, addressing and criticizing the Communist Manifesto, written almost 200 years ago by Karl Marx and Friedrich Engels. His attacks on this book, which as I will explain, are extremely poorly thought out and just wrong, become a proxy for attacks on contemporary activists and thinkers looking to alter the capitalist structure or remove it. His perspective is consistently Libertarian in the modern sense of the word, and his pathological fetish with taking a psychological position rather than any kind of synergistic, sociological relationship in terms of causality or social structure, is to me what makes him one of the more regressive intellectuals out there today, especially considering how popular he has become.”

“And since I’m about to be thrown in to defending Marx, and Progressive thoughts in general, let me make one thing extremely clear. I am not a Marxist, or a Communist, or a Socialist, or whatever. I don’t identify with any of that. I see Marx’s writing as equivalent to other philosophers, from Thomas Hobbes, to Hegel, to Thorstein Veblen, and many others. It’s all information; and some of it is good, some of it is bad–you weigh it all out. The faster all of you people see all of this as information rather than ideological dualities or symbols of something, the faster we can progress the conversation.”

“Likewise, let me clarify one other very important thing. Those that invoke disapproval of historical communism–and rightly so–almost universally say it was a consequence of the writings of Karl Marx. And I would argue that it’s a consequence of the writings of Karl Marx in the same way the Columbine massacre was a consequence of the music of Marilyn Manson. Any respected historian and theorist recognizes that the Soviet Union was actually state capitalism in the extreme. It never achieved any level of theoretical socialism, and certainly not Communism, and if you look at the writings of Vladimir Lenin, he admits to this fact. And again, that’s not defending anything; I’m being intellectually accurate…”

Indeed, it is important to be intellectually accurate, though I don’t know if I’d characterize all of Peterson’s statements as explicitly libertarian. Some of what he says is, but in a few important ways he strays from libertarian orthodoxy. For example, in his 10-minute response, he says the following:

“I mean, it isn’t obvious to me when Dr. Zizek is speaking in more apocalyptic terms…that we can solve the problems that confront us. And it’s also not a message that I’ve been purveying that unbridled capitalism, per se, as an isolated social-economic structure, actually constitutes the proper answer to the problems that confront us. I haven’t made that case in anything I’ve written, or any of the lectures that I’ve done, because I don’t believe it to be true…”

I’m not sure how he reconciles this with doing videos for the ultralibertarian propaganda outlet PragerU and similar groups, for instance. But that’s another matter.

It seems that what Peterson really rejects is authoritarianism, which he erroneously conflates with Marxism. If he made a distinction between the two, I would take him more seriously. Authoritarianism—where power is exercised by whim and raw force, without the rule of law or democratic oversight—is the real villain that led to all those needless deaths. Such absolutism can be embraced by people on either side of the political spectrum. Criticize that, absolutely, but don’t smear a long intellectual tradition using silly conspiracy theories, especially if you have not read any of the relevant literature. And, by the way, there are psychologists who’ve done serious, empirical work probing the psychological basis of authoritarian political beliefs—Robert Altemeyer in particular—and I wish Peterson would pay more attention to this scientific work (in his own field!) rather than spending all his time bashing “radical Leftists,” feminists, and “Postmodern Neo-Marxists.” I think he’d be more effective that way if he really is serious about preventing that sort of thinking, and not just bashing phantom enemies. Identity politics, too, is a serious problem that crops up on both the Left and Right ends of the political spectrum (anyone remember Sarah Palin and her “real” Americans?).

As Ben Burgis trenchantly noted, Friedrich Nietzsche was appropriated by the National Socialist movement in Germany, and many Nazis were big fans of his writing. But that is unfair to Nietzsche—not only was he not around to defend himself, but he was opposed to things like German nationalism and antisemitism during his lifetime. Yet Peterson has no problem with studying the works and thoughts of Nietzsche. So why does that same logic not hold for those who want to study Karl Marx?

Peterson concludes his statement with this:

“I’ll close with this…There is a positive relationship between economics as measured by income and happiness, on psychological well-being which might be the absence of misery. I certainly do not believe—and the evidence does not suggest—that material security is sufficient. I do believe, however, that insofar as there is a relationship between happiness and material security, that the free market system has demonstrated itself as the most efficient manner to achieve that, and that was actually the terms of the argument.”

“So that’s if the argument is capitalism versus socialism with regards to human happiness. It’s still the case that the free market constitutes the clear winner. And maybe capitalism will not solve our problems. I actually don’t believe that it will. I have in fact argued that the proper pathway forward is one of individual moral responsibility aimed at the highest good, and something for me that’s rooted in our underlying Judeo-Christian tradition that insists that each person is sovereign in their own right, and a locus of ultimate value; which is something that you can accept regardless of your religious presuppositions, and something that you do accept if you participate in a society such as ours. Even the fact that you vote—that you’re charged with that responsibility—is an indication that our society is structured such that we presume that each person is a locus of responsibility and decision-making, of such import that the very stability of the state depends upon the integrity of their character.”

“And so what I’ve been suggesting to people is that they adopt as much responsibility as they possibly can, in keeping with their aim of the highest possible good, which to me is something approximating a balance between what’s good for you as an individual, and what’s good for your family, in keeping with what’s good for you as an individual, and then what’s good for society in the larger frame, such that it’s also good for you and your family. And that’s a form of an elaborated, iterative game; a form of cooperation. It’s a sophisticated way of looking at the ways society can possibly be organized, and I happen to believe that that has to happen at the individual level first, and that’s the pathway forward that I see.”

So there’s much to like here. I’ll note that not one criticism of Peterson I’ve read from the Left—including my own—objects to his attempt to help people accept responsibility and lead better lives. None. Not one. In fact, most such commentaries are usually highly complimentary and supportive of his attempts to do so. The worst I’ve read about his advice is a dismissal of it as simplistic, facile, or merely common-sense. Clearly it is neither simplistic nor mere common sense, otherwise there would not be such a receptive audience for his ideas, and such remarks smack more of bitterness than anything else. So I think we should take these efforts by Peterson seriously.

Here’s what I see as Peterson’s most important idea: I think that, sometimes, people use the problems of the world-at-large as a cop-out to avoid dealing with their own problems. In addition, people sometimes use their anger at the wider society as an excuse to justify their own fuck-ups. They rationalize every failure as someone else’s fault instead of accepting the appropriate responsibility for their own actions and behavior. And they project their own personal disasters out onto the world as objective facts, when these are really just reflections of their own personal issues.

These are real phenomena! And to the extent that Peterson points them out, and helps people to overcome them, I think he does a great job, and I give him all the credit for that. People do sometimes wrap themselves in their own anger and misanthropy when really they should be focusing on fixing themselves and making improvements to their surroundings, including their relationships with their immediate friends and family, and improving their ability to function as competent individuals in the world.

Focus on being a good person first. Do things that contribute to society and to your own family’s well-being. Focus on those first instead of perceived slights and injustices, many of which are subjective anyway. Realize the world is sometimes unfair, no matter what you do.

Yes, absolutely. 100% agree.

Okay, now for the “but.” There are a lot of problems that cannot be solved at the individual level. For example, if you take responsibility by getting training for an essential profession, and you are crippled with student loan debt because of that, that cannot be solved at the individual level. Similarly, if you have a medical condition where some company buys the patent on your medication and jacks up the price 2000 percent, that is not something that can be solved at the individual level. I’m sure you can come up with numerous other examples.

Keeping us all isolated, working perennially on ourselves alone, prevents any kind of constructive change. We become solipsistic. We are all a part of society, and cannot cut ourselves off from it. Zizek himself makes this case at one point. He points out that telling someone suffering under the dictatorship of North Korea to make their bed, or to set their own house in perfect order, is ridiculous. The thing that brought down the Berlin Wall was people getting fed up with the system and taking collective social action. The thing that ultimately brought down the Soviet Union was people taking collective social action.

The two are not in opposition. You can work on your own house, sure, but you are still influenced by the broader society around you, and that is unavoidable. And we can and should make that society better, if we can, and that sometimes involves collective action and—dare I say it—social justice.

“What I don’t quite get—why do you put so much emphasis on ‘We have to begin with personal change?’ This is the first or the second—forgive me I don’t remember—of [the] slogans in your book. You know, ‘First set your house in order, then…'”

“But I have an extremely common-sense, naïve question here. What if, in trying to set your house in order, you discover that your house is in disorder precisely because of the way society is messed up? Which doesn’t mean, ‘Okay, let’s forget about my house,’ but you can do both at the same time.”

“I will give you now the argument example: yourself. Isn’t it that you are so socially active because you realize that it’s not enough to tell your patients ‘Set your house in order?’ Much of the reason why they are in disorder is that there is some crisis in our society…So my reproach to you would be…you know that joke, ‘Tea or coffee? Yes, please.’ Like, ‘Individual or social?’ ‘Yes, please.'”

“This is obviously an extreme situation…I hope we agree [that] to say to somebody in North Korea, ‘Set your house in order, ha ha!’ But I think in some deeper sense it goes also for our society. I’m just repeating what you are [saying]. You see some kind of a social crisis, and I don’t see clearly why you insist so much on this choice…”

On a final note, there seems to be little correlation between a country’s overall wealth and the observed happiness of its citizens in any case. In many cases, the connection is actually inverted, as in the case of South Korea:

My native South Korea is something of a star performer. With per capita income of around $20,000 (on a par with Portugal), it is not one of the richest countries, but we are talking about a country whose income was less than half that of Ghana’s until the early 1960s. With an annual per capita income growth rate of just under 4%, it is one of the fastest-growing OECD economies.

Once a byword for hyper-exploited sweatshop labour, churning out cheap transistor radios and trainers, the country now possesses the only thing that stands between iPhone and world domination (the Samsung Galaxy). It is also a world leader in industries such as shipbuilding, steel and automobiles.

The country is, per capita, the third most innovative in the world, after Japan and Taiwan, when measured by the number of patents granted by the US patent office. It has one of the world’s highest university enrollment ratios, and schoolchildren who rank in the top five in virtually all standardised international tests.

So, when things seem to be going so swimmingly, why are Koreans clamouring for big changes in the run-up to the general election next week? Because they are desperately unhappy.

According to a recent World Values Survey, Koreans are the second unhappiest people (after Hungary) among the citizens of the 32 OECD countries studied. Worse, its children are the unhappiest in the rich world, according to a survey of 23 OECD countries done by Yonsei University in Seoul. In 2009 the country topped the international league table for suicides, with 28.4 suicides per 100,000 people. Japan was a distant second with 19.7. But Koreans never used to be this unhappy. Until 1995 its suicide rate was, at about 10 per 100,000 people, just below the OECD average. Since then it has almost tripled.

South Korea’s economic reforms – a recipe for unhappiness (The Guardian)

Recently, yet another survey came out, this one supposedly on the “most stressed” and “least stressed” countries in the world. What’s interesting is that both the top and bottom of the scale tend to be relatively poor countries. The top countries were all in Latin America, while the bottom were all in Africa or the Middle East. Of course, the bottom countries tended to be failed states or theocracies dealing with acute hunger and civil war. Apparently Chad has the most negative experiences, which I could have told you from personal experience, LoL.

The annual Gallup Global Emotions Report asked people about their positive and negative experiences. The most negative country was Chad, followed by Niger. The most positive country was Paraguay, the report said. The US was the 39th most positive country, the UK was 46th and India ranked 93rd.

Interviewees were asked questions such as “did you smile or laugh a lot yesterday?” and “were you treated with respect?” in a bid to gain an insight into people’s daily experiences. Around 71% of people said they experienced a considerable amount of enjoyment the day before the survey. The poll found that levels of stress were at a new high, while levels of worry and sadness also increased. Some 39% of those polled said they had been worried the day before the survey, and 35% were stressed.

Latin American countries including Paraguay, Panama and Guatemala topped the list of positive experiences, where people reported “feeling a lot of positive emotions each day.” The poll claims it is reflective of the cultural tendency in Latin America to “focus on life’s positives”.

Despite Chad’s high score for negative experiences, people in the US and Greece were more stressed than Chadians. Greece had the most stressed population in the world with 59% saying they experienced stress on the day before the poll. Around 55% of US adults said they were stressed.

World is angry and stressed, Gallup report says (BBC)

To me, this indicates that happiness has as much to do with culture as with economic systems, unless the system can’t provide the most basic Maslowian needs like housing, healthcare, food, and personal safety. Zizek makes some interesting comments along these lines in his first 10-minute statement, which would seem to apply to Latin America:

“Years ago I was in Lithuania, and we debated when were people in some perverted sense–and this is the critique of the category of happiness for me–happy. And we came to the crazy result: Czechoslovakia in the 1970s and 1980s [after the Soviet intervention]. Why? For happiness, first, you should not have too much democracy, because this brings the burden of responsibility. Happiness means there is another guy out there and you can put all the blame on him. As the joke went in Czechoslovakia, if there is bad weather, or a storm, [some] Communist screwed it up again.”

“The other condition, the much more subtle condition–is that life was relatively, moderately good, but not perfect. Like, there was meat, but maybe once a month there was no meat in the stores. It was very good to remind you how happy you were [when there was meat.] Another thing, they had a paradise which should be at a proper distance: West Germany–affluence. It was not too far, but was not directly accessible.”

“So, maybe in your critique of Communist regimes–which I agree with you–you should focus more on something that I experienced. Don’t look only at the terror; at the totalitarian regime. There was a kind of a silent, perverted pact between power and the population, which was: ‘Leave us the power and don’t mess with us, and we guarantee you a relatively safe life, employment, private pleasures, private needs, and so on.’ For me, this is not an argument for the Communists, but against happiness…You know, people said, when the wall fell down, ‘What a wonder!’ in Poland. Solidarnosc, which was prohibited a year ago, now triumphed in the elections. Who could imagine this? Yes, but the true miracle, in a bad sense, was that four years later democratically the ex-communists came back to power.”

“For me, this is not an argument for them, but simply for the, let’s call it, the corrupted nature of happiness. My basic formula is: Happiness should be treated as a necessary by-product. If you focus on it you are lost. It comes as a by-product of you working for a cause. That’s the basic thing for me…”

Journey’s End

A few bits of housekeeping. Naked Capitalism is having a Meetup here in Milwaukee on May 1st (May Day!). I will be attending, of course. Should be a fun time.

Second, I’ve been on the fence about giving up writing this blog for a while, but I’ve finally made the decision to abandon it. I have a bunch of posts already written, and will polish them up and put them online over the next few weeks. After that, all publication here will cease.

A sincere thank-you to all the readers who’ve read and supported me over the years. I really appreciate all your thoughts and messages.

Until next time…


The Recursive Mind (Review) – 5

So far, in our review of The Recursive Mind, we’ve discovered that recursive thinking lies behind such uniquely human traits as grammatical spoken language, mental time travel, Theory of Mind, higher-order religion, and complex kinship groups.


In the final section of the book, Michael C. Corballis ponders when, how, and why we may have acquired these recursive characteristics.

Whether or not recursion holds the key to the human mind, the question remains how we came to be the way we are–at once so dominant over the other apes in terms of behavior and yet so similar in genetic terms…In modern-day science, it is difficult to avoid the conclusion that the human mind evolved through natural selection, although as we have seen, some recent authors, including Chomsky, still appeal to events that smack of the miraculous—a mutation, perhaps, that suddenly created the capacity for grammatical language…But of course we do have to deal with the seemingly vast psychological distance between ourselves and our closest relatives, the chimpanzees and bonobos. p. 167

What follows is a quick tour through the major milestones in becoming human:

1. Walking/Running

The first and most important change was our standing upright and our unique walking gait. While bipedalism may seem in retrospect to offer many obvious advantages, it turns out that it may not have been all that advantageous at all:

Bipedalism became obligate rather than facilitative from around two million years ago, and as we shall see, it was from this point that the march to humanity probably began. That is, our forebears finally gained the capacity to walk freely, and perhaps run, in open terrain, losing much of the adaptation to climb trees and move about in the forest canopy.

Even so, it remains unclear just why bipedalism was retained. As a means of locomotion on open terrain, it offers no obvious advantages. Even the knuckle-walking chimpanzee can reach speeds of up to 48 km per hour, whereas a top athlete can run at about 30 km per hour. Other quadrupedal animals, such as horses, dogs, hyenas, or lions, can easily outstrip us, if not leave us for dead. One might even wonder, perhaps, why we didn’t hop rather than stride, emulating the bipedal kangaroo, which can also outstrip us humans…The impression is that the two-legged model, like the Ford Edsel, was launched before the market was ready for it. Or before it was ready for the market, perhaps. pp. 185-186

Of course, bipedalism left the hands free for tool use, but tool use came much later in the human repertoire, so we couldn’t have evolved walking specifically for that. Persistence hunting also seems to have been a later adaptation, so it was likely not a primary cause either. Another possibility is carrying things, perhaps food or infants. Yet another is language, which, as Corballis argued earlier, may have originated with hand gestures long before verbal communication. If that’s true, then the capacity for language—as opposed to speech—goes back very far in the human repertoire.

One interesting thing I didn’t know is that even though chimpanzees are by far our closest genetic relatives among the great apes, anatomically we are closer to orangutans. This means that our transition to bipedalism may have developed not from knuckle-walking, as commonly presumed, but from hand-assisted bipedalism, where we walked upright along horizontal branches in the forest canopy, supported by our arms. The knuckle-walking gait of our chimp/bonobo cousins may not derive from our common ancestor, but may have developed after the split from the human lineage as an alternative method of crossing open territory.

The most arboreal of the great apes is the orangutan, which is more distantly related to us than either the chimpanzee or gorilla. Nevertheless its body morphology is closer to that of the human than is that of chimpanzee or gorilla.

In the forest canopy of Indonesia and Malaysia, orangutans typically adopt a posture known as hand-assisted bipedalism, supporting themselves upright on horizontal branches by holding on to other branches, usually above their heads. They stand and clamber along the branches with the legs extended, whereas chimpanzees and gorillas stand and move with flexed legs. Chimpanzees and gorillas may have adapted to climbing more vertically angled branches, involving flexed knees and a more crouched posture, leading eventually to knuckle-walking as the forested environment gave way to more open terrain. If this scenario is correct, our bipedal stance may derive from hand-assisted bipedalism, going back some 20 million years. Knuckle-walking, not bipedalism, was the true innovation. p. 184

In an earlier post we mentioned an updated version of the Aquatic Ape Hypothesis (AAH), today called by some the “Littoral Hypothesis.” The following is taken from another book by Corballis entitled “The Truth About Language”, where he goes into more detail:

Another view is that our forebears inhabited coastal areas rather than the savanna, foraging in water for shellfish and waterborne plants, and that this had a profound influence on our bodily characteristics and even our brain size. This aquatic phase may perhaps have preceded a later transition to the savanna. p. 48…Superficially, at least, it seems to explain a number of the characteristics that distinguish humans from other living apes. These include hairlessness, subcutaneous fat, bipedalism, our large brains, and even language…p. 95…The ability to breathe voluntarily…was an adaptation to diving, where you need to hyperventilate before plunging and then holding your breath during the dive. The fine-motor control over lips, tongue, velum, and throat necessary for producing consonants evolved for the swallowing of soft, slippery foods such as mollusks without biting or chewing. Think of oysters and white wine. p. 162

Philip Tobias once suggested that the term aquatic ape should be dropped, as it had acquired some notoriety over some of its more extravagant claims. [Mark] Verhaegen suggests that the aquatic theory should really be renamed “the littoral theory,” because early Homo was not so much immersed in water as foraging on the water’s edge, diving or searching in shallow water, and probably also roaming inland. p. 162…Tobias died in 2011, but in a chapter published in 2011 he wrote: “In contrast with the heavy, earth-bound view of hominin evolution, an appeal is made here for students of hominin evolution to body up, lighten and leaven their strategy by adopting a far great [sic] emphasis on the role of water and waterways in hominin phylogeny, diversification, and dispersal from one watergirt [sic] milieu to others…p. 95… Even in its modern form the aquatic ape hypothesis (AAH) remains controversial. Verhaegen quotes Wikipedia as asserting that “there is no fossil evidence for the AAH”; he disagrees, citing evidence that “virtually all archaic Homo sites are associated with abundant edible shellfish…162

To me, one of the more convincing arguments for a water-related evolutionary path for humans is the idea of giving birth in water. This may have been the way we accommodated a larger skull size alongside a pelvis designed for upright walking. I got this idea when I read an article several years ago about a woman who gave birth in water. Her husband recorded the birth and put it on the internet. The clip went viral since the birth took place almost effortlessly, with none of the agonizing pain which normally accompanies hospital births. This would also explain why human babies alone among the primates know instinctively to hold their breath in water. Perhaps human females are ideally meant to give birth in water—a practice that is once again achieving popularity in some alternative circles. Here’s the story:

Mum’s water birth video stuns the internet (BBC)

At this time, human ancestors adopted a “dual-mode” existence, using facilitative bipedalism to migrate across the rapidly expanding grasslands while retaining the ability to take refuge in the tree canopy if needed. This is evident from the anatomy of the various species of Australopithecus, who were able to walk upright but retained hands, feet, and arms that allowed them to climb trees and hang from branches. These chimp-sized animals may have adopted a scavenging mode of existence to get their daily protein, cracking bones with stones to get at the marrow inside and stealing eggs from nests, while retreating back into the forest canopy when faced with fiercer competition. The most famous Australopithecus, Lucy, appears to have died by falling out of a tree. They may have done this scavenging during the heat of the day, since many big predators are crepuscular or nocturnal, leading to the gradual loss of hair and the addition of sweat glands. They were already fairly social animals, and when threatened they may have responded by banding together and hurling stones at predators and rivals, as explored further below.

2. Throwing

Humans are able to throw projectiles with much greater force and accuracy than any other primate, or indeed any other animal. Of course, monkeys and chimps do hurl things (such as poo) when they are upset. But in humans, this ability evolved to its furthest extent.

We probably first began by throwing rocks and stones, a technique that is far more effective when done in groups, as William Von Hippel noted in The Social Leap. From there, we graduated to spears, javelins, and boomerangs, and then invented devices to further enhance our throwing capacity such as spear-throwers (woomeras) and slings. Slings continued to be devastating weapons on the battlefield well into historical times–during Roman times, the most famous and effective slingers came from the Balearic Islands in the western Mediterranean, and were widely employed as mercenaries.

Paul Bingham has argued that one of the characteristics that have reinforced social cohesion in humans is the ability to kill at a distance. Human societies can therefore be rid of dissenters in their midst, or threats from the outside, with relatively little threat of harm to the killer! Nevertheless the dissenters, or the rival band, may themselves resort to similar tactics, and so began an arms race that has continued to this day. It started, perhaps, with the throwing of rocks, followed in succession by axes, spears, boomerangs, bows and arrows, guns, rockets, bombs, and nuclear missiles, not to mention insults. Such are the marks of human progress…

Whether or not it was throwing that sustained bipedalism in an increasingly terrestrial existence, it does at least illustrate that bipedalism frees the hands for intentional potentially skilled action. It allows us to use our hands for much more than specifically chucking stuff about. Moreover, our primate heritage means that our arms are largely under intentional control, creating a new potential for operating on the world, instead of passively adapting to it. Once freed from locomotory duty, our hands and arms are also free to move in four-dimensional space, which makes them ideal signaling systems for creating and sending messages… p. 190

The idea that we evolved to throw also helps explain the mystery of the seeming perfection of the human hand, described by Jacob Bronowski as “the cutting edge of the mind.” Adding to the power and precision of throwing are the sensitivity of the fingers, the long opposable thumb, and the large area of the cortex involved in control of the hand. The shape of the hand evolved in ways consistent with holding and hurling rocks of about the size of modern baseballs or cricket balls, missile substitutes in modern pretend war. In real war, hand grenades are about the same size.

Our hands have also evolved to provide two kinds of grip, a precision grip and a power grip, and Richard W. Young suggests that these evolved for throwing and clubbing, respectively. Not only do we see young men throwing things about in sporting arenas, but we also see them wielding clubs, as in sports such as baseball, cricket, hockey, or curling. In various forms of racquet sports, the skills of clubbing and throwing seem to be combined. p. 188

3. Extended Social Groups

The last common ancestor of humans and chimps may have lived in the Pliocene Epoch, which began some 5.333 million years ago. The Pliocene transitioned into the Pleistocene, which was characterized by a rapidly changing climate and a recurring series of crippling ice ages. The Pleistocene lasted from about 2.588 million years ago to roughly 12,000 years ago, succeeded by the more stable climate of the Holocene. It was during the Pleistocene that woodlands shrank and were replaced by open grasslands. During this time, the genus Homo emerged, and human ancestors permanently transitioned from an arboreal existence to becoming savanna-based hunter-gatherers, possibly beginning as scavengers.

Adaptability was key. Any species that relied solely on the slow pace of genetic evolution would have been at a severe disadvantage in the rapidly changing world of the Pleistocene. The newly evolved Homo genus, with its omnivorous diet, free hands for tool use, upright gait, large brain, and gregarious social nature, was ideally suited for this epoch. During this time, walking went from facilitative to obligatory, and we left the tree canopy behind for good. Homo habilis was already using stone tools near the beginning of this era (although earlier Australopithecines may have used some tools as well). Then came a plethora of upright-walking, tool-using apes: Homo rudolfensis, Homo ergaster, and Homo erectus, always popular with schoolchildren.

Humans have been using tools for 300,000 years longer than we thought (io9)

Coming from the forest and moving out onto the savanna, these apes could not compete on speed, strength or aggressiveness against the big predators. What did they do? The solution was to form larger and more tightly-knit social groups. It is these social groups that are thought to have been a primary driver behind increasing intelligence and brain expansion (about which, more below).

An especially dangerous feature of the savanna was the presence of large carnivorous animals, whose numbers peaked in the early Pleistocene. They included at least 12 species of saber-tooth cats and nine species of hyena. Our puny forebears had previously been able to seek cover from these dangerous predators in more forested areas, and perhaps by retreating into water, but such means of escape were relatively sparse on the savanna. Not only did the hominins have to avoid being hunted down by these professional killers, with sharp teeth and claws, and immense speed and strength, but they also had to compete with them for food resources. p. 192

It was human intelligence and sociability that allowed our ancestors to survive in this threatening environment—a combination of man’s “intellectual powers,” and “social qualities,” as Charles Darwin put it.

The hominins therefore built on their primate inheritance of intelligence and social structure rather than on physical attributes of strength or speed. This is what might be termed the third way, which was to evolve what has been termed the “cognitive niche,” a mode of living built on social cohesion, cooperation, and efficient planning. It was a question of survival of the smartest. p. 194

It was not only our social nature but our unique social strategy, which differs from that of all other primates. Simply put, we developed extended families. We also developed cooperative child-rearing and pair-bonding, which allowed us to evolve larger social groups than other primates, who remain largely in fixed-size groups throughout their lives and do not typically develop deep relationships outside of them.

Sarah Blaffer Hrdy has argued that social bonding evolved first in the context of child rearing. She points out that great apes are loath to allow others to touch their infants during the first few months, whereas human mothers are very trusting in allowing others to carry and nurture their babies. This is evident not only in daycare centers, but in the extended family units that characterize many peoples of the world. Among New Zealand Maori, for instance, initial teaching and socialization is based on a larger unit known as whanau, which is the extended family, including children, parents, grandparents, cousins, uncles, aunts, and often beyond. The understanding of whanau is recursive, looping back many generations. p. 194

It takes a village indeed!

This puts paid to all seventeenth-century English Liberal notions of government that rely on “voluntary associations” or purposeful submission to a despot in exchange for protection and order. Governments did not form in order to secure “private property” as John Locke argued, nor were early societies a “war of all against all” as Hobbes thought—we would have gone extinct long ago if that were the case. It is private property, not social organization, which is novel in the human experience. Since extended families and kinship groups predate the genus Homo, the fact is that we humans have never had to make any sort of conscious, rational decision to form complex social groups—we are literally born into them! The question is, rather, how such groups evolved from the small tribal societies of the past into today’s large, impersonal, market-based nation-states.

4. Controlled Use of Fire

At some point, humans harnessed fire; we are the only species (that we know of) to have done so. To get a bit more technical, we harnessed a source of extrasomatic energy, later supplemented by fossil fuels. Exactly when this occurred, however, is a matter of debate. Fire does not fossilize, and while the results of ancient combustion can be detected, it is often difficult to determine whether these were natural or artificially controlled fires. Rather, arguments for a very archaic use of fire come primarily from human anatomy: humans are adapted to cooked food and cannot survive on strictly raw-food diets, unlike chimps and gorillas. This leads to the conclusion that humans have been using fire long enough to evolve a dependency on it—certainly for hundreds of thousands of years, at least. Our small jaws, duller teeth, shorter intestines, and bulbous skulls all derive from anatomical changes due to cooking. Some recent evidence suggests fire use over one million years ago. It indicates that sitting around a campfire and telling stories has been part of social bonding since time immemorial.

Richard Wrangham has suggested that the secret of hominin evolution originated in the controlled use of fire, which supplied warmth and protection from hostile predators. From around two million years ago, he thinks, Homo erectus also began to cook tubers, greatly increasing their digestibility and nutritional value. Cooked potatoes, I’m sure you will agree, are more palatable than raw ones. Other species may have been handicapped because they lacked the tools to dig for tubers, or the means to cook them.

Cooked food is softer, leading to the small mouths, weak jaws, and short digestive system that distinguish Homo from earlier hominins and other apes. Cooking also led to division of labor between the sexes, with women gathering tubers and cooking them while the men hunted game. At the same time, these complementary roles encouraged pair bonding, so that the man can be assured of something to eat if his hunting expedition fails to produce meat to go with the vegetables. p. 195

5. Rapid Brain Expansion

During the Pleistocene, the human brain underwent a remarkable and unprecedented expansion, for reasons that are still debated. From Australopithecines to archaic humans, the brain roughly tripled in size. Brain size correlates roughly with body size, and the ratio of actual brain size to the size expected for a given body size is known as the encephalization quotient. Given humans’ relatively small body size, our brains are much larger than they “should” be. The brain is also an energy hog, taking up some 20 percent of our metabolism to keep running.
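As a rough illustration, here is a minimal sketch of how an encephalization quotient can be computed. Corballis doesn’t give a formula in this section; the scaling rule below is Jerison’s classic one (expected brain mass of roughly 0.12 times body mass to the two-thirds power, masses in grams), and the body and brain masses are round illustrative figures, so treat the specifics as assumptions rather than the book’s own numbers.

```python
# Rough sketch of an encephalization quotient (EQ) calculation.
# Assumes Jerison's scaling rule: expected brain mass ~= 0.12 * body_mass ** (2/3),
# with masses in grams. The body and brain masses below are round, illustrative figures.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2 / 3)   # brain size "expected" for this body size
    return brain_g / expected             # values above 1 mean a larger brain than expected

for species, brain_g, body_g in [
    ("human", 1350, 65_000),       # ~1,350 cc brain, ~65 kg body
    ("chimpanzee", 390, 45_000),   # ~390 cc brain, ~45 kg body
]:
    print(f"{species}: EQ ~ {encephalization_quotient(brain_g, body_g):.1f}")

# Prints roughly 7 for humans and ~2.6 for chimps, in line with commonly cited values.
```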

Fossil evidence shows that brain size remained fairly static in the hominins for some four million years after the split from the apes. For example, Australopithecus afarensis…had a brain size of about 433 cc, slightly over the chimpanzee size of about 393 cc, but less than that of the much larger gorilla at 465 cc. It was the emergence of the genus Homo that signaled the change. Homo habilis and Homo rudolfensis were still clumsily bipedal but their brains ranged in size from around 500 cc to about 750 cc, a small increase over that of earlier hominins. Homo ergaster emerged a little over 1.8 million years ago, and by some 1.2 million years ago boasted a brain size of some 1,250 cc. Thus in a space of about 750,000 years, brain size more than doubled—that’s pretty quick on an evolutionary time scale.

Brain size continued to increase at a slower rate. It appears to have reached a peak, not with Homo sapiens, dating from about 170,000 years ago, but with Neanderthals…In some individual Neanderthals, brain capacity seems to have been as high as 1,800 cc, with an average of about 1,450 cc. Brain size in our own species, Homo sapiens, is a little lower, with a present-day average of about 1,350 cc, but still about three times the size expected of an ape with the same body size…this final increase in brain size—the dash for the summit as it were—seems to have coincided with an advance in technological innovation over that which had prevailed for the previous 1.5 million years. pp. 198-199

It’s not just the expansion of the brain that is remarkable, but the expansion of the neocortex, or “outer shell” of the brain where many of the “higher” cognitive functions reside. Here, too, we find that the human neocortex is much larger than expected given the size of the brain and body. The size of the neocortex is roughly correlated with intelligence and the size of the social group in mammals, giving us some indication of the intelligence and group size of early human ancestors. “Humans have the largest neocortical ratio, at 4.1, closely followed by the chimpanzee at 3.2. Gorillas lumber in at 2.65, orangutans at 2.99, and gibbons at 2.08. According to the equation relating group size to neocortical ratio, humans should belong to groups of 148, give or take about 50. This is reasonably consistent with the estimated sizes of early Neolithic villages.” (p. 198)
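To make the “equation relating group size to neocortical ratio” concrete, here is a small sketch using one commonly cited form of Dunbar’s regression: log10(group size) = 0.093 + 3.389 × log10(neocortex ratio). The exact coefficients are my addition (they are not quoted in the book excerpt), so treat them as an assumption; plugging in the ratios above does reproduce the ~148 figure.

```python
import math

# Sketch of Dunbar's group-size prediction from the neocortex ratio.
# The regression coefficients are one commonly cited form of Dunbar's equation
# (an assumption on my part; the book excerpt does not quote them):
#   log10(N) = 0.093 + 3.389 * log10(neocortex_ratio)

def predicted_group_size(neocortex_ratio: float) -> float:
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

for species, ratio in [("human", 4.1), ("chimpanzee", 3.2), ("gorilla", 2.65)]:
    print(f"{species}: ~{predicted_group_size(ratio):.0f}")

# Humans come out near 148 ("Dunbar's number"); chimpanzees around 60-65.
```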

Robin Dunbar has suggested that even though Neanderthal brains were larger overall than those of Homo sapiens, more of their neocortex was devoted to visual processing—their skulls indicate eyes that were 20 percent larger than our own, an adaptation to the darkness of the northern climes. The running of this enlarged visual system, he argues, precluded parts of the brain from being harnessed for other uses—social uses in particular. Thus, Neanderthals were not able to develop larger groupings, or things such as higher-order religions and recursion. Homo sapiens, evolving in the more tropical regions of Africa, did not have this same handicap.

Perhaps the most extraordinary revelation from this chapter is that there appear to be significant genetic changes to the brain within recorded history!

We are beginning to learn something of the genetic changes that gave us our swollen heads. One gene known to be a specific regulator of brain size is the abnormal spindle-like microcephaly associated (ASPM) gene, and the evidence suggests strong positive selection of this gene in the lineage leading to Homo sapiens. Indeed, a selective sweep appears to have occurred as recently as 5,800 years ago, suggesting that the human brain is still undergoing rapid evolution. Another gene known as microcephalin (MCPH1) has also been shown to regulate brain size, and one variant in modern humans arose an estimated 37,000 years ago. Other genes involved in the control of brain size that have undergone accelerated rates of protein evolution at points in human lineage have also been identified. p. 199

What’s most extraordinary about this information, given our discussion of Julian Jaynes’s theories, is that the evidence above indicates a selective sweep of genes that affect brain development in exactly the time-frame specified by Jaynes—something that Jaynes’s critics have always claimed was patently impossible! Of course, this does not mean that these genes are what lay behind his hypothesized development of “consciousness”—only that it is possible that there were indeed changes to how the brain functions within recorded history.

Often it’s claimed that the breakdown of the bicameral mind would have required a massive change in the brain’s architecture. Critics mistakenly assert that Jaynes implied that the corpus callosum—the massive bundle of nerves that connects the two hemispheres—evolved during historical times. But Jaynes claims nothing of the sort! While he discusses split-brain patients (with a severed corpus callosum) in order to understand the separate functions of each hemisphere, nowhere does he imply any recent anatomical changes to the brain’s basic structure. Besides, the fact that hearing voices is common in humans today indicates that such a massive change is not needed in any case; only a slight change in perception was required. Jaynes suggests that an inhibition of communication between Broca’s and Wernicke’s areas, which are connected by the anterior commissure, might have contributed to the breakdown of the bicameral mind. There is also evidence that the amount of “white matter” in the brain (as contrasted with gray matter) changes brain function, and abnormalities in white matter have been associated with schizophrenia and other mental illnesses. We have no idea whether the genes mentioned above had anything to do with this, of course. But preliminary data show that the ASPM gene does not affect IQ, so it was not raw intelligence that caused its selective sweep. Could this gene have altered the functioning of the brain in something like the manner Jaynes described, and did this give rise to the recursive “self” developing and expanding sometime during the late Bronze Age? Here we can only speculate.

Here’s anthropologist John Hawks explaining the significance of this discovery:

Haplogroup D for Microcephalin apparently came under selection around 37,000 years ago (confidence limit from 14,000 to 60,000 years ago). This is very, very recent compared to the overall coalescence age of all the haplotypes at the locus (1.7 million years). Some populations have this allele at 100 percent, while many others are above 70 or 80 percent. Selection on the allele must therefore have been pretty strong to cause this rapid increase in frequency. If the effect of the allele is additive or dominant, this selective advantage would be on the order of 2 or 3 percent — an advantage in reproduction.

The story for ASPM is similar, but even more extreme. Here, the selected allele came under selection only 5800 years ago (!) (confidence between 500 and 14,100 years). Its proliferation has almost entirely occurred within the bounds of recorded history. And to come to its present high proportion in some populations of near 50 percent in such a short time, its selective advantage must have been very strong indeed — on the order of 5 to 8 percent. In other words, for every twenty children of people without the selected D haplogroup, people with a copy of the allele averaged twenty-one, or slightly more.

Recent human brain evolution and population differences (john hawks weblog)
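To get a feel for Hawks’s arithmetic, here is a minimal sketch of how quickly an allele with a roughly 5 percent per-generation reproductive advantage can spread. The model (simple deterministic genic selection), the starting frequency, and the 25-year generation time are all illustrative assumptions on my part, not figures from Hawks or the studies he cites.

```python
# Minimal sketch of a selective sweep, in the spirit of Hawks's arithmetic:
# carriers of the favored allele leave (1 + s) offspring for every 1 left by
# non-carriers. Deterministic genic-selection model; the starting frequency,
# selection coefficient, and 25-year generation time are illustrative assumptions.

def sweep(p0: float = 1e-5, s: float = 0.05, years: int = 5800, gen_years: int = 25) -> float:
    p = p0
    for _ in range(years // gen_years):
        p = p * (1 + s) / (1 + p * s)   # standard one-generation update under genic selection
    return p

print(f"Allele frequency after 5,800 years: {sweep():.2f}")

# Starting from a rare variant (~1 in 100,000 chromosomes), a 5% advantage carries it
# to roughly 45% in ~232 generations -- the same order as the ~50% frequency Hawks describes.
```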

In a bizarre Planet of the Apes scenario, Chinese scientists have recently inserted human genes related to brain growth and cognition into monkeys in order to determine what role genes play in the evolution of intelligence:

Human intelligence is one of evolution’s most consequential inventions. It is the result of a sprint that started millions of years ago, leading to ever bigger brains and new abilities. Eventually, humans stood upright, took up the plow, and created civilization, while our primate cousins stayed in the trees.

Now scientists in southern China report that they’ve tried to narrow the evolutionary gap, creating several transgenic macaque monkeys with extra copies of a human gene suspected of playing a role in shaping human intelligence.

“This was the first attempt to understand the evolution of human cognition using a transgenic monkey model,” says Bing Su, the geneticist at the Kunming Institute of Zoology who led the effort…

…What we know is that our humanlike ancestors’ brains rapidly grew in size and power. To find the genes that caused the change, scientists have sought out differences between humans and chimpanzees, whose genes are about 98% similar to ours. The objective, says Sikela, was to locate “the jewels of our genome”—that is, the DNA that makes us uniquely human.

For instance, one popular candidate gene called FOXP2—the “language gene” in press reports—became famous for its potential link to human speech. (A British family whose members inherited an abnormal version had trouble speaking.) Scientists from Tokyo to Berlin were soon mutating the gene in mice and listening with ultrasonic microphones to see if their squeaks changed.

Su was fascinated by a different gene: MCPH1, or microcephalin. Not only did the gene’s sequence differ between humans and apes, but babies with damage to microcephalin are born with tiny heads, providing a link to brain size. With his students, Su once used calipers and head spanners to measure the heads of 867 Chinese men and women to see if the results could be explained by differences in the gene.

By 2010, though, Su saw a chance to carry out a potentially more definitive experiment—adding the human microcephalin gene to a monkey…

Chinese scientists have put human brain genes in monkeys—and yes, they may be smarter (MIT Technology Review)

One of the more remarkable theories argues that a virus, or perhaps even a symbiotic bacterium, helped along human brain growth, and hence intelligence. Robin Dunbar raises the intriguing possibility that brain expansion was fueled by a symbiotic alliance with the tuberculosis bacterium!

The problem of supporting a large brain is so demanding that it may have resulted in the rather intriguing possibility that we used external help to do so in the form of the tuberculosis bacterium. Although TB is often seen as a terrible disease, in fact only 5 per cent of those who carry the bacterium are symptomatic, and only a proportion of those die (usually when the symptoms are exacerbated by poor living conditions). In fact, the TB bacterium behaves much more like a symbiont than a pathogen – even though, like many of our other symbionts, it can become pathogenic under extreme conditions. The important issue is that the bacterium excretes nicotinamide (vitamin B3), a vitamin that turns out to be crucial for normal brain development. Chronic shortage of B3 rapidly triggers degenerative brain conditions like pellagra. The crucial point here is that vitamin B3 is primarily available only from meat, and so a supplementary source of B3 might have become desirable once meat came to play a central role in our diet. Hunting, unlike gathering, is always a bit chancy, and meat supplies are invariably rather unpredictable. This may have become even more crucial during the Neolithic: cereals, in particular, are poor in vitamin B3 and a regular alternative supply might have become essential after the switch to settled agriculture.

Although it was once thought that humans caught TB from their cattle after domestication around 8,000 years ago, the genetic evidence now suggests that human and bovine TB are completely separate strains, and that the human form dates back at least 70,000 years. If so, its appearance is suspiciously close to the sudden upsurge in brain size in anatomically modern humans that started around 100,000 years ago. Human Evolution: Our Brains and Behavior by Robin Dunbar; pp. 248-249

6. Childhood

It’s not just brain size. The human brain undergoes an unusually large amount of development after birth, unlike the brains of most other species, even other great apes. Other great apes don’t have extended childhoods and adolescence. This leads to the helplessness and utter dependency of our infants in the near term, but it has a big payoff in social adaptability in the long term. It means that humans’ intellectual capabilities can be shaped, to a large extent, by the environment they are born into rather than just by genes: the brain is “wired up” based on the needs of that environment. This affects things like language and sociability, and it echoes what we saw above: adaptability and behavioral flexibility were the key to our species’ success.

Another critical difference between humans and other primates lies in the way in which the human brain develops from birth to adulthood. We humans appear to be unique among our fellow primates, and perhaps even among the hominins, in passing through four developmental stages–infancy, childhood, juvenality, and adolescence…During infancy, lasting from birth to age two and a half, infants proceed from babbling to the point that they know that words or gestures have meaning, and can string them together in two-word sentences. This is about the level that the bonobo Kanzi has reached…it is the next stage, childhood, that seems to be especially critical to the emergence of grammatical language and theory of mind…Childhood seems to be the language link that is missing in great apes and the early hominins, which may account for the fact that, so far at least, great apes have not acquired recursive grammar. But it is also during childhood that theory of mind, episodic memory, and understanding of the future emerge. Childhood may be the crucible of the recursive mind.

During the juvenile phase, from age 7 to around 10, children begin to appreciate the more pragmatic use of language, and how to use language to achieve social ends. The final stage is adolescence, which…is…unique to our own species, and sees the full flowering of pragmatic and social function, in such activities as storytelling, gossip, and sexual maneuvering. Adolescence also has a distinctive effect on male speech, since the surge of testosterone increases the length and mass of the vocal folds, and lowers the vibration frequency…

Locke and Bogin focus on language, but the staged manner in which the brain develops may account more generally for the recursive structure of the human mind. Recursive embedding implies hierarchical structure, involving metacontrol over what is embedded in what, and how many layers of embedding are constructed. Early development may establish basic routines that are later organized in recursive fashion.
pp. 201-203

I’ve always been struck by how children who are more intellectually precocious tend to take longer to mature—they are “late bloomers.” In contrast, there are those who mature very quickly and then hit a plateau. Of course, we lump them all together in prison-like schools according to chronological age, despite highly variable developmental speeds and gender differences. This leads to all sorts of bullying and abuse, as the faster-developing “jocks” torment the slower-developing “nerds”—a feature unique to modern industrial civilization. The emotional scarring resulting from this scenario causes incalculable amounts of suffering and misery, but I digress…

Human children are the most voracious learners planet Earth has ever seen, and they are that way because their brains are still rapidly developing after birth. Neoteny, and the childhood it spawned, not only extended the time during which we grow up but ensured that we spent it developing not inside the safety of the womb but outside in the wide, convoluted, and unpredictable world.

The same neuronal networks that in other animals are largely set before or shortly after birth remain open and flexible in us. Other primates also exhibit “sensitive periods” for learning as their brains develop, but they pass quickly, and their brain circuitry is mostly established by their first birthday, leaving them far less touched by the experiences of their youth.

Based on the current fossil evidence, this was true to a lesser extent of the 26 other savanna apes and humans. Homo habilis, H. ergaster, H. erectus, even H. heidelbergensis (which is likely the common ancestor of Neanderthals, Denisovans, and us), all had prolonged childhoods compared with chimpanzees and gorillas, but none as long as ours. In fact, Harvard paleoanthropologist Tanya Smith and her colleagues have found that Neanderthals reversed the trend. By the time they met their end around 30,000 years ago, they were reaching childbearing age at about the age of 11 or 12, which is three to five years earlier than their Homo sapiens cousins…

We are different. During those six critical years, our brains furiously wire and rewire themselves, capturing experience, encoding and applying it to the needs of our particular life. Our extended childhood essentially enables our brains to better match our experience and environment. It is the foundation of the thing we call our personalities, the attributes that make you you and me me. Without it, you would be far more similar to everyone else, far less quirky and creative and less, well … you. Our childhood also helps explain how chimpanzees, remarkable as they are, can have 99 percent of our DNA but nothing like the same level of diversity, complexity, or inventiveness.

The Evolution of childhood: Prolonged development helped Homo sapiens succeed (Slate)

7. Tool Use

Humans used tools largely in the context of hunting and butchering large prey, and probably also to secure other resources, such as digging up the tubers mentioned earlier. Gourds and eggshells are used by foragers to carry water. Slings for hurling stones may go back a very long time. In the book The Artificial Ape, archaeologist Timothy Taylor argued that humans must have used baby slings—probably made from animal pelts—to carry their infants as far back as a million years ago. He makes this case because infants cannot walk effectively for the first few years of their lives, and since early humans were constantly on the move, mothers must have had some way of efficiently carrying their offspring that left their hands free (other apes cling to their mother’s hair—not an option for us). He argues that the sling was critical to allowing our infants to be born as helpless as they are, and thus facilitated the extended infancy described above. Fire may have also been a useful tool—many cultures around the world have used it to reshape the natural landscape and drive game.

When looking at the long arc of history, what stands out is not so much the rapidity of cultural change as just how slowly tools developed over millions of years. While today we are used to rapid, constant technological change, during the Pleistocene toolkits often remained unchanged for hundreds of thousands of years. So much for innovation!

Nevertheless, advances in toolmaking were slow. There is little to suggest that the early hominins were any more adept at making or using tools than are present-day chimpanzees, despite being bipedal, and it was not really until the appearance of the genus Homo that toolmaking became more sophisticated.

The earliest such tools date from about 2.5 million years ago, and are tentatively associated with H. rudolfensis. These tools, relatively crude cutters and scrapers, make up what is known as the Oldowan industry. A somewhat more sophisticated tool industry, known as the Acheulian industry, dates from around 1.6 million years ago in Africa, with bifacial tools and hand-axes…The Acheulian industry remained fairly static for about 1.5 million years, and seems to have persisted in at least one human site dating from only 125,000 years ago. Nevertheless, there was an acceleration of technological invention from around 300,000 to 400,000 years ago, when the Acheulian industry gave way to the more versatile Levallois technology. Tools comprising combinations of elements began to appear, including axes, knives and scrapers mounted with hafts or handles, and stone-tipped spears. John F. Hoffecker sees the origins of recursion in these combinatorial tools, which were associated with our own forebears, as well as with the Neanderthals, who evolved separately from around 700,000 years ago. pp. 205-206

Corballis speculates that the rapid tool advancement seen in more recent Homo sapiens owes its origins more to our evolved social capabilities than to developments resulting from the primitive crude stone tools of our earlier ancestors: “My guess is that recursive thought probably evolved in social interaction and communication before it was evident in the material creations of our forebears. The recursiveness and generativity of technology, and of such modern artifacts as mathematics, computers, machines, cities, art, and music, probably owe their origins to the complexities of social interaction and storytelling, rather than to the crafting of tools…” (p. 206)

The full flowering of stone tool technology came during a period called the Upper Paleolithic, or Late Stone Age, also associated with such behaviorally modern artifacts as sculptural “Venus” figurines, cave paintings, and deliberate burials (indicating some rudimentary religious belief). The adoption of such “modern” behavioral traits and the adoption of vastly more sophisticated tools are related, argues Corballis:

This second wave of innovation was most pronounced in Europe and western Asia, beginning roughly when Homo sapiens arrived there. The Upper Paleolithic marked nearly 30,000 years of almost constant change, culminating in a level of modernity equivalent to that of many present-day indigenous peoples. Technological advances included clothing, watercraft, heated shelters, refrigerated storage pits, and bows and arrows. Elegant flutes made from bone and ivory have been unearthed in southwest Germany, dated at some 40,000 years ago, suggesting early musical ensembles…Flax fibers dating from 30,000 years ago have been found in a cave in Georgia, and were probably used in hafting axes and spears, and perhaps to make textiles; and the presence of hair suggests also that they were used to sew clothes out of animal skins. The people of this period mixed chemical compounds, made kilns to fire ceramics, and domesticated other species.

Stone tools date from over two million years ago, but remained fairly static until the Upper Paleolithic, when they developed to include more sophisticated blade tools, as well as burins and tools for grinding. Tools were also fashioned from other materials, such as bone and ivory, and included needles, awls, drills, and fishhooks…p. 214

8. Migration

The general consensus today is that all modern humans are descended from groups that left Africa after 70,000 years ago, perhaps driven by climate change. These migrants eventually displaced all earlier species of archaic Homo. We also know that some interbreeding between our ancestors and these other species took place. Humans carry DNA signatures from Neanderthals, Denisovans, and an as-yet-undiscovered human ancestor.

Evolutionary biologists have classified six major haplogroups of humans: L0, L1, L2, L3, M and N. A haplogroup is a large grouping of haplotypes, which are groups of alleles (variant forms of a gene) inherited from a single parent. In this case, geneticists used mitochondrial DNA, which is inherited exclusively from our mothers, to specify the haplogroups. Mitochondria, the “batteries” of the cell, began their existence as symbiotic bacteria, and thus have a distinct genetic signature. Of the four “L” haplogroups, only L3 migrated out of Africa. The M and N haplogroups are descendants of the L3 haplogroup. Haplogroup M has a more recent common ancestor than haplogroup N, and is found both inside and outside Africa. All indigenous lineages outside of Africa derive exclusively from the M and N haplogroups.
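To make those descent relationships easier to see, here is a toy sketch (my own illustration in Python, not drawn from any of the sources above) of the haplogroup tree exactly as the paragraph describes it. A real mitochondrial phylogeny nests these lineages more finely, so treat this purely as a reading aid.

# Mitochondrial haplogroup descent as described above (illustrative only).
# The four "L" lineages are African; M and N descend from L3, and only they
# founded the indigenous lineages outside Africa.
HAPLOGROUP_CHILDREN = {
    "L0": [],
    "L1": [],
    "L2": [],
    "L3": ["M", "N"],
    "M": [],
    "N": [],
}

def descendants(haplogroup):
    """Recursively collect every haplogroup descended from the given one."""
    result = []
    for child in HAPLOGROUP_CHILDREN.get(haplogroup, []):
        result.append(child)
        result.extend(descendants(child))
    return result

print(descendants("L3"))  # ['M', 'N'] -- the source of all non-African lineages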

Why haplogroup L3 alone migrated out of Africa is a big question. Another related big question for evolutionary biologists is, how much of modern human behavior existed in Africa prior to this outmigration, and how much arose after it? For example, did complex spoken language evolve before or after we left Africa? What about symbolic thought, art, religion, and sophisticated tool use? Did we use fire? Given the fact that sapiens displaced all the earlier hominins who had evolved outside of the continent (most likely from Homo heidelbergensis, and perhaps a few remote branches of erectus), we must have had some kind of innate advantage over the native inhabitants, the thinking goes. What exactly it was has proved harder to determine, but recursion might well be the answer.

The population of the earliest lineage, L0, is estimated to have expanded through the period 200,000 to 100,000 years ago… The L0 and L1 lineages exist at higher frequencies than the other lineages among present-day hunter-gatherers, who may therefore offer a window into the early history of Homo sapiens…The L3 lineage is of special interest, because it expanded rapidly in size from about 60,000 to 80,000 years ago, and seems to have been the launching pad for the migrations out of Africa that eventually populated the globe. Of the two non-African lineages that are the immediate descendants of L3, lineage M is estimated to have migrated out of Africa between 53,000 and 69,000 years ago, and lineage N between 50,000 and 64,000 years ago.

Why did L3 expand so rapidly, and migrate from Africa? One suggestion is that L3 gained some cultural advantage over the other lineages, perhaps through the invention of superior technologies, and that this gave them the means to migrate successfully. Paul Mellars suggests that the African exodus was predated by advances in toolmaking, including new stone-blade technologies, the working of animal skins, hafted implements, and ornaments. Some of the improvements in tool technology can be attributed to the use of fire to improve the flaking properties of stone, which dates from around 72,000 years ago on the south coast of Africa…

It need not follow that the L3 people were biologically more advanced than their African cousins, and it may well be that the exodus was driven by climate change rather than any technical superiority of L3 over the other haplogroups that remained in Africa. During the last ice age, there was a series of rapid climate swings known as Heinrich events. One of these events, known as H9, seems to have occurred at the time of the exodus from Africa, and was characterized by cooling and loss of vegetation, making large parts of North, West, and East Africa inhospitable for human occupation. It may also have been accompanied by a drop in sea levels, creating a land bridge into the Levant. So out of Africa they went, looking no doubt for greener pastures.

The exodus seems to have proceeded along the coast of the Red Sea, across the land bridge, and then round the southern coasts of Asia and southeast Asia, to reach New Guinea and Australia by at least 45,000 years ago. Mellars notes similarities in artifacts along that route as far as India, but remarks that technology seems to have declined east of India, especially in Australia and New Guinea. This may be attributable, he suggests, to the lack of suitable materials, adaptation to a more coastal environment requiring different technologies, and random fluctuations (cultural drift). A remarkable point of similarity, though, is the presence of red ochre in both Africa and in the earliest known human remains in Australia. Ochre was probably used in ritualistic body-painting, and perhaps in painting other surfaces. pp. 209-211

9. The Rise of Agriculture

Of course, the wild climate swings of the Pleistocene era eventually came to an end, giving way to the more climatically stable (to date) Holocene epoch. As the Last Glacial Maximum (LGM) came to a close, the earth underwent a massive de-glaciation, sending massive amounts of cold, fresh water into the world’s oceans. Sea levels rose, and many land areas became submerged, such as Beringia (isolating the Americas), Doggerland (isolating Britain) and the Sahul Shelf (isolating Australasia). The melting glaciers caused the climate to undergo a rapid shift once again, killing off large numbers of the megafauna that earlier humans had relied on as their primary food source—animals such as the woolly mammoth and ground sloth. The vast herds of reindeer that had provided sustenance for Paleolithic Europeans retreated northwards with the receding taiga, and southern Europe became heavily forested with larch and birch trees. In reaction, many human ancestors found themselves living in forests and grasslands once again, relying more and more on smaller, more solitary prey animals, and plant foods such as fruits, seeds and nuts—a change sometimes referred to as the Broad Spectrum Revolution.

We know that the domestication of cereals dates from about 10-12,000 years ago in the Fertile Crescent—present-day Iraq, Syria, Lebanon, Israel, Kuwait, Jordan, southeastern Turkey and southwestern Iran. What’s less clear, however, is just how long these plants were cultivated before we decided to grow them intensively enough to alter their DNA to the point where these plants became dependent upon us (and we upon them). Recent evidence keeps pushing this horticultural activity—“proto-farming”—further and further back into the past, suggesting that agriculture is less of an anomaly or innovation than formerly thought. It apparently coexisted for a long time alongside seasonal hunting and foraging in the Near East. In addition, it appears that other desirable plant foods like figs and legumes were cultivated long before cereal grains. In some cases, the Neolithic Revolution appears to have been actively resisted for as long as possible by many cultures.

After the colder, drier Younger Dryas period ended about 12,000 years ago, humans began to settle down in various grassy foothills and river valleys around the world and more intensively cultivate plant foods—especially cereal crops—which began the long march toward civilization, for better or worse.

10. Final Conclusions

Corballis strongly argues here (as he has in several books) that language originated with gestures, possibly before human ancestors migrated from Africa. Verbal speech, by contrast, came about much later, and may have originated sometime after the exodus from Africa—perhaps as recently as 50-60,000 years ago based on anatomical evidence.

He argues that second- or perhaps third-order recursion was sophisticated enough to account for many of the types of behaviors we see in archaic humans (such as cooperative hunting and rudimentary religious beliefs), but that higher levels of recursive thought were inaccessible to them. These, he says, are unique to Homo sapiens, and may have begun as recently as 30,000 years ago during the Upper Paleolithic era, but we don’t know for sure.

He argues that these recursive abilities were mainly the result of human social needs, which then exploded into other diverse areas such as art, music, religion, and—perhaps most significantly—grammatical language, which can combine recursively to form an infinite number of ideas and concepts. Much later, things like advanced technology, science and mathematics flowed from these same recursive abilities as human societies grew ever larger and more complex. Humans’ ability to plan for and anticipate alternative futures is far more sophisticated than in any other species.

These recursive abilities also gave us the ability to know what others are thinking, leading directly to cumulative memetic evolution—passing down ideas and concepts, and adding to and extending them over time. No other species can do this as we can. Recursive thought also gave birth to mental time travel, allowing human thought to roam both the past and the future, and imagine alternative futures, or even fictional ones—i.e. stories, which bind human societies together. Stories gave rise to more complicated social groups, which are recursively nested in expanding circles of kinship and affiliation.

By looking at simpler examples from around the animal kingdom, Corballis argues that the development of these abilities was not a sudden, random and inexplicable event as some have argued. Rather, he says, it was the natural outcome of the same evolutionary processes that led to all the other mental and physical abilities that make us unique in the animal kingdom:

In this book, I have tried to argue that recursion holds the key to that difference in mind, underlying such uniquely human characteristics as language, theory of mind, and mental time travel. It was not so much a new faculty, though, as an extension of existing faculties…there is no reason to suppose that the recursive mind evolved in some single, miraculous step, or even that it was confined to our species. Instead, it was shaped by natural selection, probably largely during the last two million years. p. 226

Although recursion was critical to the evolution of the human mind…it is not a “module,” the name given to specific, innate functional units, many of which are said to have evolved during the Pleistocene. Nor did it depend on some specific mutation, or some special kind of neuron, or the sudden appearance of a new brain structure. Rather, recursion probably evolved through progressive increases in short-term memory and capacity for hierarchical organization. These in turn were probably dependent on brain size, which increased incrementally, albeit rapidly, during the Pleistocene. But incremental changes can lead to sudden more substantial jumps, as when water boils or a balloon pops. In mathematics, such sudden shifts are known as catastrophes, so we may perhaps conclude that the emergence of the human mind was catastrophic. p. 222

I have argued…that the extension of recursive principles to manufacture and technology was made possible largely through changes in the way we communicate. Language evolved initially for the sharing of social and episodic information, and depended at first on mime, using bodily movements to convey meaning. Through conventionalization, communication became less mimetic and more abstract. In the course of time it retreated into the face and eventually into the mouth, as late Homo gained voluntary control over voicing and the vocal tract, and the recursive ability to create infinite meaning through combinations of articulate sounds. This was an exercise in miniaturization, releasing the rest of the body, as well as recursive principles, for manipulation of the physical environment.

The complexities of the modern world are not of course the product of individual minds. Rather, they are the cumulative products of culture. Most of us have no idea how a jet engine, or a computer, or even a lightbulb, actually works. We all stand on the shoulders of giants…pp. 223-224

This concludes my review of The Recursive Mind by Michael C. Corballis. I hope you’ve enjoyed it and learned something new along the way.

The Recursive Mind (Review) – 4

3. Theory of Mind

Now, I know what you’re thinking. All this stuff about recursion and Julian Jaynes is a little bit tedious. I’m not interested at all. Why does he keep talking about this stuff, anyway? Jaynes’s ideas are clearly preposterous–only an idiot would even consider them. I should quit reading, or maybe head over to Slate Star Codex or Ran Prieur, or maybe Reddit or Ecosophia or Cassandra’s Legacy or…

How do I know what you’re thinking (correctly or not)? It’s because I have a Theory of Mind (ToM), which allows me to imagine and anticipate what other people are thinking. Most likely you have one too, which is why you can detect a degree of self-deprecation in my statements above.

Theory of mind is the ability to infer the mental states of other people. It’s often referred to as a sort of “mind-reading.” Daniel Dennett called it the “intentional stance,” meaning that we understand that other people have intentions and motivations that are different from our own. It evolved because we have lived for millions of years in complex societies that require cooperation and intelligence. “According to the intentional stance, we interact with people according to what we think is going on in their minds, rather than in terms of their physical attributes…” (p 137)

The lack of understanding of other people’s perspectives is what Jean Piaget noticed most in children. Central to many of his notions is the idea that children are egocentric, where their own needs and desires are all that exists: “During the earliest stages the child perceives things like a solipsist who is unaware of himself as subject and is familiar only with his own actions.” In other words, the child is unable to recognize that other people have thoughts or feelings different from (or even in conflict with) their own. They are also unaware that others cannot see the same thing that they do. One way to test theory of mind in children is the Sally-Anne test: Sally hides a marble in her basket and leaves the room, Anne moves the marble to her own box, and the child is asked where Sally will look for it when she returns. Passing the test means answering “the basket,” which requires understanding that Sally now holds a false belief.


Theory of mind is also something that helps us teach and learn. In order for me to effectively teach you, I need to have some idea of what you’re thinking so I can present the material in a way you can understand it. And, of course, you need to have some idea of what’s going on in my mind to understand what I’m trying to teach you. Theory of mind, therefore, is related to cultural transmission (or, more precisely, memetics). Human culture plays such an outsize role in our behavior partly because of our theory of mind. Theory of mind is also a recursive operation which involves embedding your consciousness into someone else’s conscious mind:

From the point of view of this book, the important aspect of theory of mind is that it is recursive. This is captured by the different orders of intentionality… Zero-order intentionality refers to actions or behaviors that imply no subjective state, as in reflex or automatic acts. First-order intentionality involves a single subjective term, as in Alice wants Fred to go away. Second-order intentionality would involve two such terms, as in Ted thinks Alice wants Fred to go away. It is at this level that theory of mind begins.

And so on to third order: Alice believes that Fred thinks she wants him to go away. Recursion kicks in once we get beyond the first order, and our social life is replete with such examples. There seems to be some reason to believe, though, that we lose track at about the fifth or sixth order, perhaps because of limited working memory capacity rather than any intrinsic limit on recursion itself. We can perhaps just wrap our minds around propositions like: Ted suspects that Alice believes that he does indeed suspect that Fred thinks that she wants him (Fred) to go away. That’s fifth order, as you can tell by counting the words in bold type [in the original; the mental-state verbs: suspects, believes, suspect, thinks, wants]. You could make it sixth order by adding ‘George imagines that…’ at the beginning. p. 137
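Since so much of the book hangs on these orders of intentionality, it may help to see the recursion spelled out. What follows is a minimal sketch of my own (in Python; it is not from Corballis) that represents each attribution as one proposition embedded inside another and counts the order simply by recursing through the nesting:

# A proposition is either a plain state of affairs (a string, order 0) or a
# (subject, mental-state verb, embedded proposition) triple.
def order_of_intentionality(p):
    """Recursively count the nested mental-state attributions in a proposition."""
    if isinstance(p, str):
        return 0
    subject, verb, embedded = p
    return 1 + order_of_intentionality(embedded)

# "Ted thinks Alice wants Fred to go away" -- second-order intentionality
second = ("Ted", "thinks", ("Alice", "wants", "Fred goes away"))

# Corballis's fifth-order example, built up the same way:
fifth = ("Ted", "suspects",
         ("Alice", "believes",
          ("Ted", "suspects",
           ("Fred", "thinks",
            ("Alice", "wants", "Fred goes away")))))

print(order_of_intentionality(second))  # 2
print(order_of_intentionality(fifth))   # 5

The point of the exercise is just that each additional order is one more level of embedding, which is exactly what makes it recursive; around the fifth or sixth level it is our working memory, not the recursion itself, that gives out.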

Clearly, higher orders of intentionality have been driven by the demands of the social environment one finds oneself in. I will later argue that these higher-order intentionalities developed when we moved from environments where the challenges we faced were predominantly natural (finding food, escaping predators, etc.) to ones where the challenges were primarily social (managing workers, finding mates, leading armies, long-distance trading, negotiating debts, etc.). This change resulted in a fundamental remodeling of the human brain after the advent of settled civilization, which allowed us to function in such social environments, probably by affecting the action of our serotonin receptors. We’ll get to that later.

Do you see what she sees?

It’s not only one’s mental perspective, but even one’s physical perspective, that ToM can let us take:

Whether instinctive or learned, the human ability to infer the mental states of others goes well beyond the detection of emotion. To take another simple and seemingly obvious example, we can understand what another individual can see. This is again an example of recursion, since we can insert that individual’s experience into our own. It is by no means a trivial feat, since it requires the mental rotation and transformation of visual scenes to match what the other person can see, and the construction of visual scenes that are not immediately visible.

For example, if you are talking to someone face-to-face, you know that she can see what is behind you, though you can’t. Someone standing in a different location necessarily sees the world from a different angle, and to understand that person’s view requires an act of mental rotation and translation. pp. 134-135

I suspect this ability has something to do with out-of-body experiences, where we “see” ourselves from the perspective of somewhere outside our bodies. Recall Jaynes’s point that the “self” is not truly located anywhere in physical space–including behind the eyes. Thus our “self” can theoretically locate itself anywhere, including the ceiling of our hospital room when we are dying.

Not everyone has theory of mind, though, at least not to the same degree. One of the defining characteristics of the autism spectrum is difficulty with ToM. Autistic people tend to not be able to infer what others are thinking, and this leads to certain social handicaps. Corballis makes a common distinction between “people-people” (as in, I’m a “people-person”–avoid anyone who describes themselves this way), and “things-people”, exemplified by engineers, doctors, scientists, programmers, professors, and such-like. “People-people” typically have a highly-developed ToM, which facilitates their feral social cunning. Technically-minded people, by contrast, often (though not always) have a less-developed theory of mind, as exemplified by this quote from the fictional Alan Turing in The Imitation Game: “When people talk to each other, they never say what they mean. They say something else and you’re expected to just know what they mean.”

Research has found autistic people who ace intelligence tests may still have trouble navigating public transportation or preparing a meal. Scoring low on a measure of social ability predicts an incongruity between IQ and adaptive skills. (Reddit)

One fascinating theory of autism that Corballis describes is based on a conflict between the mother’s and the father’s genes imprinting on the developing fetus in the womb:

In mammalian species, the only obligatory contribution of the male to the offspring is the sperm, and the father relies primarily on his genes to influence the offspring to behave in ways that support his biological interest.

Paternal genes should therefore favor self-interested behavior in the offspring, drawing on the mother’s resources and preventing her from using resources on offspring that might have been sired by other fathers. The mother, on the other hand, has continuing investment in the child both before birth…and after birth…Maternal genes should therefore operate to conserve her resources, favoring sociability and educability—nice kids who go to school and do what they’re told.

Maternal genes are expressed most strongly in the cortex, representing theory of mind, language, and social competence, whereas paternal genes tend to be expressed more in the limbic system, which deals with resource-demanding basic drives, such as aggression, appetites, and emotion. Autism, then, can be regarded as the extreme expression of paternal genes, schizophrenia as the extreme expression of maternal genes.

Many of the characteristics linked to the autistic and psychotic spectra are physical, and can be readily understood in terms of the struggle for maternal resources. The autistic spectrum is associated with overgrowth of the placenta, larger brain size, higher levels of growth factors, and the psychotic spectrum with placental undergrowth, smaller brain size, and slow growth…

Imprinting may have played a major role in human evolution. One suggestion is that evolution of the human brain was driven by the progressive influence of maternal genes, leading to expansion of the neocortex and the emergence of recursive cognition, including language and theory of mind. The persisting influence of paternal genes, though, may have preserved the overall balance between people people and things people, while also permitting a degree of difference.

Simon Baron-Cohen has suggested that the dimension can also be understood along an axis of empathizers versus systemizers. People people tend to empathize with others, through adopting the intentional stance and the ability to take the perspective of others. Things people may excel at systemizing, through obsessive attention to detail and compulsive extraction of rules… pp. 141-142

I think this partly explains the popularity of libertarian economics among a certain set of the population, especially in Silicon Valley where people high on the autism spectrum tend to congregate. They tend to treat people as objects for their money-making schemes. They are unable to understand that people are not rational robots, and thus completely buy into the myth of Homo economicus. Their systemizing brains tend to see the Market as a perfect, frictionless, clockwork operating system (if only government “interference” would get out of the way, that is). It also explains why they feel nothing toward the victims of their “creative destruction.” It’s notable that most self-described libertarians tend to be males (who are often more interested in “things” and have a less developed theory of mind in general). In addition, research has shown that people who elect to study economics professionally have lower levels of empathy than the general population (and they then go on to shape economic theory to conform to their beliefs). This should be somewhat concerning, since economics, unlike physics or chemistry or meteorology, concerns people.

This sort of calculating, self-centered hyper-rationality also lies behind the capitalist ethos.

The dark side of theory of mind is, of course, the ability to manipulate others. This has been referred to as Machiavellian intelligence, after Niccolo Machiavelli, the Italian diplomat who wrote about how rulers can manipulate the ruled to keep them in awe and obedience. It is certain that Machiavelli had a well-developed theory of mind, because he wrote stuff like this: “Now, in order to execute a political commission well, it is necessary to know the character of the prince and those who sway his counsels; … but it is above all things necessary to make himself esteemed, which he will do if he so regulates his actions and conversation that he shall be thought a man of honour, liberal, and sincere…It is undoubtedly necessary for the ambassador occasionally to mask his game; but it should be done so as not to awaken suspicion and he ought also to be prepared with an answer in case of discovery.” (Wikiquote) In fact, CEOs and middle managers tend to be consummate social manipulators—it’s been shown using psychological tests that successful CEOs and politicians consistently score higher on traits of sociopathy than the general population.

There may be a dark side to social intelligence, though, since some unscrupulous individuals may take advantage of the cooperative efforts of others, without themselves contributing. These individuals are known as freeloaders. In order to counteract their behavior, we have evolved ways of detecting them. Evolutionary psychologists refer to a “cheater-detection module” in the brain that enables us to detect these imposters, but they in turn have developed more sophisticated techniques to escape detection.

This recursive sequence of cheater detection and cheater detection-detection has led to what has been called a “cognitive arms race,” perhaps first identified by the British evolutionary theorist Robert Trivers, and later amplified by other evolutionary psychologists. The ability to take advantage of others through such recursive thinking has been termed Machiavellian intelligence, whereby we use social strategies not merely to cooperate with our fellows, but also to outwit and deceive them…p. 136

It’s been argued (by me, for instance) that a hyperactive “cheater detection module,” often allied with lower levels of empathy, is what lies behind politically conservative beliefs. I would posit, too, that it also underlies many of the misogynistic attitudes among the so-called “Alt-Right”, since their theory of mind is too poorly developed to understand women’s thinking well enough to have positive interactions with them (instead preferring submission and obedience). A tendency toward poor ToM, in my opinion, explains a lot of seemingly unrelated characteristics of the Alt-Right (economic libertarianism, misogyny, racism, technophilia, narcissism, atheism, hyper-rationality, ultra-hereditarianism, “political incorrectness,” etc.)

Theory of mind appears to be more developed among women than men, probably because of their childrearing role. Many men can relate to the hyperactive tendency of their wives or girlfriends to “mind read” (“What are you thinking right now?”) and claim that they are correct in their inferences (“I know you’re thinking about your ex…!”).

Theory of Mind has long been seen as fundamental to the neuroscience of religious belief. The ability to attribute mental states to other people leads to attributing human-like attributes and consciousness to other creatures, and even things. If you’ve ever hit your computer for “misbehaving” or kicked your car for breaking down on you, then you know what I’m talking about. The tendency to anthropomorphize is behind the misattribution of human traits and behaviors to non-human animals, viz:

According to Robin Dunbar, it is through Theory of Mind that people may have come to know God, as it were. The notion of a God who is kind, who watches over us, who punishes, who admits us to Heaven if we are suitably virtuous, depends on the underlying understanding that other beings—in this case a supposedly supernatural one—can have human-like thoughts and emotions.

Indeed Dunbar argues that several orders of intentionality may be required, since religion is a social activity, dependent on shared beliefs. The recursive loops that are necessary run something like this: I suppose that you think that I believe there are gods who intend to influence our futures because they understand our desires. This is fifth-order intentionality. Dunbar himself must have achieved sixth-order intentionality if he supposes all of this, and if you suppose that he does then you have reached seventh-order…

If God depends on theory of mind, so too, perhaps, does the concept of the self. This returns us to the opening paragraph of this book, and Descartes’ famous syllogism “I think, therefore I am.” Since he was appealing to his own thought about thinking, this is second-order intentionality. Of course, we also understand the self to continue through time, which requires the (recursive) understanding that our consciousness also transcends the present. pp. 137-138 (emphasis mine)

Thus, higher-order gods tend to emerge at a certain point of socio-political complexity, where higher-order states of mind are achieved by a majority of people. A recent paper attempted to determine whether so-called “Moralizing High Gods” (MHG) and “Broad Supernatural Punishers” (BSP) are what allowed larger societies to form, or were rather the result of larger societies and the need to hold them together. The authors concluded the latter:

Do “Big Societies” Need “Big Gods”? (Cliodynamica)

Moralizing Gods as Effect, Not Cause (Marmalade)


Here’s evolutionary psychologist Robin Dunbar explaining why humans appear to be the only primates with the higher-order intentionality necessary to form Moralizing High Gods and Broad Supernatural Punishers:

We know from neuroimaging experiments that mentalizing competencies correlate with the volume of the mentalizing network in the brain, and especially with the volume of the orbitofrontal cortex, and this provides important support for the claim that, across primates, mentalizing competencies correlate with frontal lobe volume. Given this, we can…estimate the mentalizing competencies of fossil hominins, since they must, by definition, be strung out between the great apes and modern humans…As a group, the australopithecines cluster nicely around second-order intentionality, along with other great apes; early Homo populations all sit at third-order intentionality, while archaic humans and Neanderthals can just about manage fourth order; only fossil [Anatomically Modern Humans] (like their living descendants) achieve fifth order. Human Evolution: Our Brains and Behavior by Robin Dunbar, p. 242

… The sophistication of one’s religion ultimately depends on the level of intentionality one is capable of. While one can certainly have religion of some kind with third or fourth order intentionality, there seems to be a real phase shift in the quality of religion that can be maintained once one achieves fifth order intentionality. Given that archaic humans, including Neanderthals, don’t appear to have been more than fourth order intentional, it seems unlikely that they would have had religions of very great complexity. Quite what that means remains to be determined, but the limited archaeological evidence for an active religious life among archaics suggests that, at best, it wasn’t very sophisticated. Human Evolution: Our Brains and Behavior by Robin Dunbar, pp. 285-286

A hyperactive Theory of Mind has long been suspected of playing a role in religious belief, as well as in schizophrenia, in which intentionality has run amok, leading to paranoia and hallucinations (objects talking to you, etc.):

One of the most basic insights of the cognitive science of religion is that religions the world over and throughout human history have reliably evolved so as to involve representations that engage humans’ mental machinery for dealing with the social world. After all, such matters enthrall human minds. The gods and, even more fundamentally, the ancestors are social agents too! On the basis of knowing that the gods are social actors, religious participants know straightaway that they have beliefs, intentions, feelings, preferences, loyalties, motivations, and all of the other states of mind that we recognize in ourselves and others.

What this means is, first, that religious participants are instantly entitled to all of the inferences about social relations, which come as defaults with the development of theory of mind, and, second, that even the most naïve participants can reason about them effortlessly. Such knowledge need not be taught. We deploy the same folk psychology that we utilize in human commerce to understand, explain, and predict the gods’ states of mind and behaviors.

How Religions Captivate Human Minds (Psychology Today)

What Religion is Really All About (Psychology Today)

Most potently for our discussion of Julian Jaynes’s theories is the fact that fMRI scans have shown that auditory hallucinations—of the type that Jaynes described as the basis of ancient belief in gods—activate brain regions associated with Theory of Mind. Here’s psychologist Charles Fernyhough:

…When my colleagues and I scanned people’s brains while they were doing dialogic inner speech, we found activation in the left inferior frontal gyrus, a region typically implicated in inner speech. But we also found right hemisphere activation close to a region known as the temporoparietal junction (TPJ)…that’s an area that is associated with thinking about other people’s minds, and it wasn’t activated when people were thinking monologically…Two established networks are harnessed for the purpose of responding to the mind’s responses in an interaction that is neatly cost-effective in terms of processing resources. Instead of speaking endlessly without expectation of an answer, the brain’s work blooms into dialogue… The Voices Within by Charles Fernyhough; pp. 107-108 (emphasis mine)

Theory of mind is also involved with the brain’s default mode network (DMN), a pattern of neural activity that takes place during mind-wandering, and seems to be largely responsible for the creation of the “unitary self.” It’s quite likely that the perception of the inner voice as belonging to another being with its own personality traits, as Jaynes described, activates our inbuilt ToM module, as do feelings of an “invisible presence” also reported by non-clinical voice hearers. This is from Michael Pollan’s book on psychedelic research:

The default mode network stands in a kind of seesaw relationship with the attentional networks that wake up whenever the outside world demands our attention; when one is active, the other goes quiet, and vice versa. But as any person can tell you, quite a lot happens in the mind when nothing much is going on outside us. (In fact, the DMN consumes a disproportionate share of the brain’s energy.) Working at a remove from our sensory processing of the outside world, the default mode is most active when we are engaged in higher-level “metacognitive” processes such as self-reflection, mental time travel, mental constructions (such as the self or ego), moral reasoning, and “theory of mind”—the ability to attribute mental states to others, as when we try to imagine “what it is like” to be someone else. All these functions belong exclusively to humans, and specifically to adult humans, for the default mode network isn’t operational until late in a child’s development. How To Change Your Mind by Michael Pollan; pp. 301-303

Theory of mind is also critical for signed and spoken language. After all, I need to have some idea what’s going on in your mind in order to get my point across. The more I can insert myself into your worldview, the more effectively I can tailor my language to communicate with you, dear reader. Hopefully, I’ve done a decent job (if you didn’t leave after the first paragraph, that is!). It also encourages language construction and development. In our earlier example, one would hope that the understanding of metaphor is sufficient that we implicitly understand that inosculation does not literally involve things kissing each other!

There is evidence to believe that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks. One might argue that this relationship is due solely to the fact that both language and theory of mind seem to begin to develop substantially around the same time in children (between ages 2–5). However, many other abilities develop during this same time period as well, and do not produce such high correlations with one another nor with theory of mind. There must be something else going on to explain the relationship between theory of mind and language.

Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey. Since a verbal utterance is often underdetermined, it can have different meanings depending on the actual context. Theory of mind abilities can play a crucial role in understanding the communicative and informative intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills….

Theory of Mind (Wikipedia)

Irony, metaphor, humor, and sarcasm are all examples of how language and theory of mind are related. Irony involves a knowing contrast between what is said and what is meant, meaning that you need to be able to infer what another person was thinking. “Irony depends on theory of mind, the secure knowledge that the listener understands one’s true intent. It is perhaps most commonly used among friends, who share common attitudes and threads of thought; indeed it has been estimated that irony is used in some 8 percent of conversational exchanges between friends.” (pp. 159-160) Sarcasm also relies on understanding the difference between what someone said and what they meant. I’m sure you’ve experienced an instance when someone writes some over-the-top comment on an online forum intended to sarcastically parody a spurious point of view, and some reader takes it at face value and loses their shit. It might be because we can’t hear the tone of voice or see the body language of the other person, but I suspect it also has something to do with the high percentage of high-spectrum individuals who frequent such message boards.

Metaphor, too, relies on a non-literal understanding of language. If the captain calls for “all hands on deck,” it is understood that he wants more than just our hands, and that we aren’t supposed to place our hands down on the deck. If it’s “raining cats and dogs,” most of us know that animals are not falling out of the sky. And if I advise you to “watch your head,” you know to look out for low obstructions and not have an out-of-body experience. Which reminds me, humor also relies on ToM.

Theory of mind allows normal individuals to use language in a loose way that tends not to be understood by those with autism. Most of us, if asked the question “Would you mind telling me the time?” would probably answer with the time, but an autistic individual would be more inclined to give a literal answer, which might be something like “No, I don’t mind.” Or if you ask someone whether she can reach a certain book, you might expect her to reach for the book and hand it to you, but an autistic person might simply respond yes or no. This reminds me that I once made the mistake of asking a philosopher, “Is it raining or snowing outside?”–wanting to know whether I should grab an umbrella or a warm coat. He said, “Yes.” Theory of mind allows us to use our language flexibly and loosely precisely because we share unspoken thoughts, which serve to clarify or amplify the actual spoken message. pp. 160-161

If you do happen to be autistic, and all the stuff I just said goes over your head, don’t fret. I have enough theory of mind to sympathize with your plight. Although, if you are, you might more easily get this old programmer joke:

A programmer is at work when his wife calls and asks him to go to the store. She says she needs a gallon of milk, and if they have fresh eggs, buy a dozen. He comes home with 12 gallons of milk.
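For the terminally literal-minded, the joke parses roughly like this (a deliberately pedantic Python sketch of mine, contrasting the programmer’s reading with the intended one):

# Instruction: "Get a gallon of milk, and if they have eggs, get a dozen."
def programmer_reading(store_has_eggs):
    # Literal parse: "a dozen" binds to the only item mentioned so far -- milk.
    milk_gallons = 1
    if store_has_eggs:
        milk_gallons = 12
    return {"milk_gallons": milk_gallons, "eggs": 0}

def intended_reading(store_has_eggs):
    # What she meant: a dozen eggs, if the store has them.
    return {"milk_gallons": 1, "eggs": 12 if store_has_eggs else 0}

print(programmer_reading(True))  # {'milk_gallons': 12, 'eggs': 0}
print(intended_reading(True))    # {'milk_gallons': 1, 'eggs': 12}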

The relationship between creativity, mechanical aptitude, genius, and mental illness is complex and poorly understood, but has been a source of fascination for centuries. Oftentimes creative people were thought to be “possessed” by something outside of their own normal consciousness or abilities:

Recent evidence suggests that a particular polymorphism on a gene known to be related to the risk of psychosis is also related to creativity in people with high intellectual achievement.

The tendency to schizophrenia or bipolar disorder may underlie creativity in the arts, as exemplified by musicians such as Bela Bartok, Ludwig van Beethoven, Maurice Ravel, or Peter Warlock, artists such as Amedeo Clemente Modigliani, Maurice Utrillo, or Vincent van Gogh, and writers such as Jack Kerouac, D. H. Lawrence, Eugene O’Neill, or Marcel Proust. The esteemed mathematician John Forbes Nash, subject of the Hollywood movie A Beautiful Mind, is another example. The late David Horrobin went so far as to argue that people with schizophrenia were regarded as the visionaries who shaped human destiny itself, and it was only with the Industrial Revolution, and a change in diet, that schizophrenics were seen as mentally ill. p. 143

Horrobin’s speculations are indeed fascinating, and only briefly alluded to in the text above:

Horrobin…argues that the changes which propelled humanity to its current global ascendancy were the same as those which have left us vulnerable to mental disease.

‘We became human because of small genetic changes in the chemistry of the fat in our skulls,’ he says. ‘These changes injected into our ancestors both the seeds of the illness of schizophrenia and the extraordinary minds which made us human.’

Horrobin’s theory also provides support for observations that have linked the most intelligent, imaginative members of our species with mental disease, in particular schizophrenia – an association supported by studies in Iceland, Finland, New York and London. These show that ‘families with schizophrenic members seem to have a greater variety of skills and abilities, and a greater likelihood of producing high achievers,’ he states. As examples, Horrobin points out that Einstein had a son who was schizophrenic, as was James Joyce’s daughter and Carl Jung’s mother.

In addition, Horrobin points to a long list of geniuses whose personalities and temperaments have betrayed schizoid tendencies or signs of mental instability. These include Schumann, Strindberg, Poe, Kafka, Wittgenstein and Newton. Controversially, Horrobin also includes individuals such as Darwin and Faraday, generally thought to have displayed mental stability.

Nevertheless, psychologists agree that it is possible to make a link between mental illness and creativity. ‘Great minds are marked by their ability to make connections between unexpected events or trends,’ said Professor Til Wykes, of the Institute of Psychiatry, London. ‘By the same token, those suffering from mental illness often make unexpected or inappropriate connections between day-to-day events.’

According to Horrobin, schizophrenia and human genius began to manifest themselves as a result of evolutionary pressures that triggered genetic changes in our brain cells, allowing us to make unexpected links with different events, an ability that lifted our species to a new intellectual plane. Early manifestations of this creative change include the 30,000-year-old cave paintings found in France and Spain…

Schizophrenia ‘helped the ascent of man’ (The Guardian)

Writers May Be More Likely to Have Schizophrenia (PsychCentral)

The link between mental illness and diet is intriguing. For example, the popular ketogenic diet was originally developed not to lose weight, but to treat epilepsy! And, remarkably, a recent study has shown that a ketogenic diet caused remission of long-standing schizophrenia in certain patients. Recall that voice-hearing is a key symptom of schizophrenia (as well as some types of epilepsy). Was a change in diet partially responsible for what Jaynes referred to as bicameralism?

The medical version of the ketogenic diet is a high-fat, low-carbohydrate, moderate-protein diet proven to work for epilepsy. …While referred to as a “diet,” make no mistake: this is a powerful medical intervention. Studies show that over 50 percent of children with epilepsy who do not respond to medications experience significant reductions in the frequency and severity of their seizures, with some becoming completely seizure-free.

Using epilepsy treatments in psychiatry is nothing new. Anticonvulsant medications are often used to treat psychiatric disorders. Depakote, Lamictal, Tegretol, Neurontin, Topamax, and all of the benzodiazepines (medications like Valium and Ativan, commonly prescribed for anxiety) are all examples of anticonvulsant medications routinely prescribed in the treatment of psychiatric disorders. Therefore, it’s not unreasonable to think that a proven anticonvulsant dietary intervention might also help some people with psychiatric symptoms.

Interestingly, the effects of this diet on the brain have been studied for decades because neurologists have been trying to figure out how it works in epilepsy. This diet is known to produce ketones which are used as a fuel source in place of glucose. This may help to provide fuel to insulin resistant brain cells. This diet is also known to affect a number of neurotransmitters and ion channels in the brain, improve metabolism, and decrease inflammation. So there is existing science to support why this diet might help schizophrenia.

Chronic Schizophrenia Put Into Remission Without Medication (Psychology Today)

4. Kinship

The Sierpinski triangle provides a good model for human social organization

Although not discussed by Corballis, kinship structures are also inherently recursive. Given that kinship structures form the primordial organizational structure for humans, this is another important feature of human cognition that appears to derive from our recursive abilities. For a description of this, we’ll turn once again to Robin Dunbar’s book on Human Evolution. Dunbar (of Dunbar’s number fame) makes the case that the ability to supply names of kin members may be the very basis for spoken language itself!

There is one important aspect of language that some have argued constitutes the origin of language itself – the naming of kin.

There is no particular reason to assume that the ability to name kin relationships was in any way ancestral, although it may well be the case that naming individuals appeared very early. On the other hand, labeling kinship categories (brother, sister, grandfather, aunt, cousin) is quite sophisticated: it requires us to make generalizations and create linguistic categories. And it probably requires us to be able to handle embeddedness, since kinship pedigrees are naturally embedded structures.

Kinship labels allow us to sum up in a single word the exact relationship between two individuals. The consensus among anthropologists is that there are only about six major types of kinship naming systems – usually referred to as Hawaiian, Eskimo, Sudanese, Crow, Omaha and Iroquois after the eponymous tribes that have these different kinship naming systems. They differ mainly in terms of whether they distinguish parallel from cross cousins and whether descent is reckoned unilaterally or bilaterally.

The reasons why these naming systems differ have yet to be explained satisfactorily. Nonetheless, given that one of their important functions is to specify who can marry whom, it is likely that they reflect local variations in mating and inheritance patterns. The Crow and Omaha kinship naming systems, for example, are mirror images of each other and seem to be a consequence of differing levels of paternity certainty (as a result, one society is patrilineal, the other matrilineal). Some of these may be accidents of cultural history, while others may be due to the exigencies of the local ecology. Kinship naming systems are especially important, for example, when there are monopolizable resources like land that can be passed on from one generation to the next and it becomes crucial to know just who is entitled, by descent, to inherit. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 272-273

Systems of kinship appear to be largely based around the means of subsistence and rules of inheritance. Herders, for example, tend to be patriarchal, and hence patrilineal. The same goes for agrarian societies where inheritance of arable land is important. Horticultural societies, by contrast, are often more matrilineal, reflecting women’s important role in food production. Hunter-gatherers, where passing down property is rare, are often bilateral. These are, of course, just rules of thumb. Sometimes tribes are divided into two groups, which anthropologists call moieties (from the French for “half”), which are designed to prevent inbreeding (brides are exchanged exclusively across moieties).

Anthropologists have sometimes claimed that biology cannot explain human kinship naming systems because many societies classify biologically unrelated individuals as kin. This is a specious argument for two separate reasons. One is that the claim is based on a naive understanding of what biological kinship is all about.

This is well illustrated by how we treat in-laws. In English, we classify in-laws (who are biologically unrelated to us) using the same kin terms that we use for real biological relatives (father-in-law, sister-in-law, etc.). However…we actually treat them, in emotional terms, as though they were real biological kin, and we do so for a very good biological reason: they share with us a common genetic interest in the next generation.

We tend to think of genetic relatedness as reflecting past history (i.e. how two people are related in a pedigree that plots descent from some common ancestor back in time). But in fact, biologically speaking, this isn’t really the issue, although it is a convenient approximation for deciding who is related to whom. In an exceptionally insightful but rarely appreciated book (mainly because it is very heavy on maths), Austen Hughes showed that the real issue in kinship is not relatedness back in time but relatedness to future offspring. In-laws have just as much stake in the offspring of a marriage as any other relative, and hence should be treated as though they are biological relatives. Hughes showed that this more sophisticated interpretation of biological relatedness readily explains a large number of ethnographic examples of kinship naming and co-residence that anthropologists have viewed as biologically inexplicable. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277

As a sort of proof of this, many of the algorithms that have been developed to determine genetic relatedness between individuals (whether they carry the same genes) are recursive (see the sketch below). It’s also notable that the Pirahã, whose language allegedly does not use recursion, also do not have extended kinship groups (or ancestor worship or higher-order gods, for that matter). In fact, they are said to live entirely in the present, meaning no mental time travel either.

Piraha Indians, Recursion, Phonemic Inventory Size and the Evolutionary Significance of Simplicity (Anthropogenesis)
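As promised above, here is a minimal sketch (my own, in Python, not taken from Dunbar or the linked article) of the kind of recursive relatedness calculation just mentioned: the standard kinship coefficient walks the pedigree recursively, splitting each step between a person’s two parents. For simplicity it ignores inbreeding among the founders, and the pedigree and names are invented for the example.

# Minimal pedigree: person -> (father, mother); founders map to (None, None).
PEDIGREE = {
    "grandpa": (None, None), "grandma": (None, None),
    "dad": ("grandpa", "grandma"), "aunt": ("grandpa", "grandma"),
    "mom": (None, None), "uncle_by_marriage": (None, None),
    "me": ("dad", "mom"), "cousin": ("uncle_by_marriage", "aunt"),
}

def generation(person):
    """How many generations below the founders a person sits (founders are 0)."""
    father, mother = PEDIGREE.get(person, (None, None))
    if father is None and mother is None:
        return 0
    return 1 + max(generation(father), generation(mother))

def kinship(a, b):
    """Recursive kinship coefficient, ignoring inbreeding among founders."""
    if a is None or b is None:
        return 0.0
    if a == b:
        return 0.5
    # Always recurse through the parents of whichever individual sits lower in
    # the pedigree, so every step moves upward toward common ancestors.
    if generation(a) < generation(b):
        a, b = b, a
    father, mother = PEDIGREE.get(a, (None, None))
    return 0.5 * (kinship(father, b) + kinship(mother, b))

# The coefficient of relationship is twice the kinship coefficient for non-inbred pairs.
print(2 * kinship("me", "dad"))     # 0.5   (parent and child)
print(2 * kinship("me", "cousin"))  # 0.125 (first cousins)

The recursion bottoms out at the founders, and every branch point halves the contribution, which is why first cousins come out at one-eighth.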

The second point is that in traditional small-scale societies everyone in the community is kin, whether by descent or by marriage; those few who aren’t soon become so by marrying someone or by being given some appropriate status as fictive or adoptive kin. The fact that some people are misclassified as kin or a few strangers are granted fictional kinship status is not evidence that kinship naming systems do not follow biological principles: a handful of exceptions won’t negate the underlying evolutionary processes associated with biological kinship, not least because everything in biology is statistical rather than absolute. One would need to show that a significant proportion of naming categories cross meaningful biological boundaries, but in fact they never do. Adopted children can come to see their adoptive parents as their real parents, but adoption itself is quite rare; moreover, when it does occur in traditional societies it typically involves adoption by relatives (as anthropological studies have demonstrated). A real sense of bonding usually happens only when the child is very young (and even then the effect is much stronger for the child than for the parents – who, after all, know the child is not theirs).

Given that kinship naming systems seem to broadly follow biological categories of relatedness, a natural assumption is that they arise from biological kin selection theory… It seems we have a gut response to help relatives preferentially, presumably as a consequence of kin selection…Some of the more distant categories of kin (second and third cousins, and cousins once removed, as well as great-grandparents and great-great-grandparents) attract almost as strong a response from us as close kin. Yet these distant relationships are purely linguistic categories that someone has labelled for us (‘Jack is your second cousin – you share a great-grandmother’). The moment you are told that somebody is related to you, albeit distantly, it seems to place them in a very different category from mere friends, even if you have never met them before…You only need to know one thing about kin – that they are related to us (and maybe exactly how closely they are related), whereas with a friend we have to track back through all the past interactions to decide how they actually behaved on different occasions. Because less processing has to be done, decisions about kin should be done faster and at less cognitive cost than decisions about unrelated individuals. This would imply that, psychologically, kinship is an implicit process (i.e. it is automated), whereas friendship is an explicit process (we have to think about it)…

It may be no coincidence that 150 individuals is almost exactly the number of living descendants (i.e. members of the three currently living generations: grandparents, parents and children) of a single ancestral pair two generations back (i.e. the great-great-grandparents) in a society with exogamy (mates of one sex come from outside the community, while the other sex remains for life in the community into which it was born). This is about as far back as anyone in the community can have personal knowledge about who is whose offspring so as to be able to vouch for how everyone is related to each other. It is striking that no kinship naming system identifies kin beyond this extended pedigree with its natural boundary at the community of 150 individuals. It seems as though our kinship naming systems may be explicitly designed to keep track of and maintain knowledge about the members of natural human communities. Human Evolution: Our Brains and Behavior, by Robin Dunbar; pp. 273-277
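
Dunbar does not show the arithmetic behind that figure of 150 in the passage quoted, but a crude back-of-the-envelope version (my own, under the loudly flagged assumption of about three surviving children per couple) lands in the right neighbourhood:

# Rough arithmetic only; the fertility figure is an assumption, not Dunbar's.
CHILDREN_PER_COUPLE = 3                    # assumed surviving children per couple
g1 = CHILDREN_PER_COUPLE                   # great-grandparents' generation (now dead)
g2 = g1 * CHILDREN_PER_COUPLE              # grandparents = 9
g3 = g2 * CHILDREN_PER_COUPLE              # parents      = 27
g4 = g3 * CHILDREN_PER_COUPLE              # children     = 81
living_descendants = g2 + g3 + g4          # 117
spouses_married_in = g2 + g3               # one spouse per member of the two reproducing generations
print(living_descendants + spouses_married_in)   # 153 -- roughly Dunbar's 150

Whether one counts the in-marrying spouses or simply assumes slightly higher fertility, the point is the order of magnitude: a single ancestral pair five generations deep yields a community of about 150 people.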

Corballis concludes:

Recursion, then, is not the exclusive preserve of social interaction. Our mechanical world is as recursively complex as is the social world. There are wheels within wheels, engines within engines, computers within computers. Cities are containers built of containers within containers, going right down, I suppose, to handbags and pockets within our clothing. Recursive routines are a commonplace in computer programming, and it is mathematics that gives us the clearest idea of what recursion is all about. But recursion may well have stemmed from runaway theory of mind, and been later released into the mechanical world. p. 144
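
Corballis’s image of containers within containers is, quite literally, how recursive routines in programming work: a structure is defined in terms of smaller copies of itself, and the routine that processes it calls itself on those copies. A toy illustration (mine, not the book’s):

# A container is just a name mapped to the list of containers inside it.
city = {"city": [
    {"house": [
        {"wardrobe": [
            {"handbag": [
                {"pocket": []},
            ]},
        ]},
    ]},
    {"office": [
        {"desk": [{"drawer": []}]},
    ]},
]}

def depth(container):
    """How deeply nested is the deepest container? The routine mirrors the data:
    it answers the question by asking it again of whatever is inside."""
    contents = next(iter(container.values()))
    if not contents:                       # an empty container ends the recursion
        return 1
    return 1 + max(depth(inner) for inner in contents)

print(depth(city))  # 5: city > house > wardrobe > handbag > pocket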

In the final section of The Recursive Mind, Corballis takes a quick tour through human evolution to see when these abilities may have first emerged. That’s what we’ll take a look at in our last installment of this series.

The Recursive Mind (Review) – 3

Part 1

Part 2

2. Mental Time Travel

The word “remembering” is used loosely and imprecisely. There are actually multiple different types of memory; for example, episodic memory and semantic memory.

Episodic memory: The memory of actual events located in time and space, i.e “reminiscing.”

Semantic memory: The storehouse of knowledge that we possess, but which does not involve any kind of conscious recollection.

Semantic memory refers to general world knowledge that we have accumulated throughout our lives. This general knowledge (facts, ideas, meaning and concepts) is intertwined in experience and dependent on culture.

Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives, and which we can mentally re-create at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat.

We can learn about new concepts by applying our knowledge learned from things in the past. The counterpart to declarative or explicit memory is nondeclarative memory or implicit memory.

Semantic memory (Wikipedia)

Episodic memory is essential for creating the narrative self. Episodic memory takes various forms, for example:

Specific events: When you first set foot in the ocean.

General events: What it feels like stepping into the ocean in general. This is a memory of what a personal event is generally like. It might be based on memories of having stepped into the ocean many times over the years.

Flashbulb memories: Flashbulb memories are critical autobiographical memories about a major event.

Episodic Memory (Wikipedia)

For example, if you are taking a test for school, you are probably not reminiscing about the study session you had the previous evening, or where you need to be the next class period. You are probably not thinking about your childhood, or about the fabulous career prospects that are sure to result from passing this test. Those episodic memories—inserting yourself into past or future scenarios—would probably be a hindrance to the test you are presently trying to complete. Semantic memory would be what you are drawing upon to answer the questions (hopefully correctly).


It is often difficult to distinguish between one and the other. Autobiographical memories are often combinations of the two—lived experience combined with autobiographical stories and family folklore. Sometimes, we can even convince ourselves that things that didn’t happen actually did (false memories). Our autobiographical sense of self is determined by this process.

Endel Tulving has described remembering as autonoetic, or self-knowing, in that one has projected one’s self into the past to re-experience some earlier episode. Simply knowing something, like the boiling point of water, is noetic, and implies no shift of consciousness. Autonoetic awareness, then, is recursive, in that one can insert previous personal experience into present awareness. This is analogous to the embedding of phrases within phrases, or sentences within sentences.

Deeper levels of embedding are also possible, as when I remember that yesterday I had remembered an event that occurred at some earlier time. Chunks of episodic awareness can thus be inserted into each other in recursive fashion. Having coffee at a conference recently, I was reminded of an earlier conference where I managed to spill coffee on a distinguished philosopher. This is memory of a memory of an event. I shall suggest later that this kind of embedding may have set the stage for the recursive structure of language itself (p. 85) [Coincidentally, as I was typing this paragraph, I spilled coffee on the book. Perhaps you will spill coffee on your keyboard while reading this. – CH]

Corballis mentions the case of the English musician Clive Wearing, whose hippocampus was damaged, leaving him with anterograde and retrograde amnesia. At the other end of the spectrum is the Russian mnemonist Solomon Shereshevsky.

The Abyss (Oliver Sacks, The New Yorker)

Long-term memory can further be subdivided into implicit memory and explicit (or declarative) memory.

“Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition.” (p. 98) … Implicit memory…enables us to learn without any awareness that we are doing so. It is presumably more primitive in an evolutionary sense than is explicit memory, which is made up of semantic and episodic memory. Explicit memory is sometimes called declarative memory because it is the kind of memory we can talk about or declare.

Implicit memory does not depend on the hippocampus, so amnesia resulting from hippocampal damage does not entirely prevent adaptation to new environments or conditions, but such adaptation does not enter consciousness. p. 88 (emphasis mine)

Explicit memory, by contrast, “provide[s] yet more adaptive flexibility, because it does not depend on immediate evocation from the environment” p. 98 (emphasis mine)

The textbook case of implicit memory is riding a bicycle. You don’t think about it, or ponder how to do it; you just do it. No amount of intellectual thought and pondering and thinking through your options will help you to swim or ride a bike or play the piano. When a line drive is hit to the shortstop, implicit memory, not explicit memory, catches the ball (although the catch might provide a nice explicit memory for the shortstop later on). A daydreaming shortstop would miss the ball completely.

Words are stored in semantic memory, and only rarely or transiently in episodic memory. I have very little memory of the occasions on which I learned the meanings of the some 50,000 words that I know–although I can remember occasionally looking up obscure words that I didn’t know, or that had escaped my semantic memory. The grammatical rules by which we string words together may be regarded as implicit rather than explicit memory, as automatic, perhaps, as riding a bicycle. Indeed, so automatic are the rules of grammar that linguists have still not been able to elaborate all of them explicitly. p. 126 (emphasis mine)

Conditioning, whether classical (‘signal learning’) or operant (‘solution learning’, also called instrumental learning), is another type of learning that does not require conscious, deliberative thought. It is simple stimulus and response. You touch the stove, and you learn that the stove is hot. There was no thinking involved when Pavlov’s dogs salivated at the sound of a bell, for example. In a very unethical experiment, the behaviorist John B. Watson took a nine-month-old orphan and conditioned him to be afraid of rats, rabbits, monkeys, dogs and masks. He did this by making a loud, sharp noise (banging a metal bar with a hammer), which the child was afraid of, whenever the child was presented with those things. By associating the sound with the stimulus, he was able to induce a fear of those items. But there was no volition; no conscious thought was involved in this process. It works the same way on dogs, rabbits, humans or fruit flies. Behaviorism tells us next to nothing about human consciousness, or about what makes us different.

These types of conditioning may be said to fall under the category of implicit memory. As we have seen, implicit memory may also include the learning of skills and even mental strategies to cope with environmental challenges. Implicit memories are elicited by the immediate environment, and do not involve consciousness or volition. Of course, one may remember the experience of learning to ride a bicycle, but that is distinct from the learning itself…These are episodic memories, independent of the process of actually learning (more or less) to ride the bike. p. 98 (emphasis mine, italics in original)

This important distinction is what is behind Jaynes’s declaration that learning and remembering do not require consciousness. Implicit memory and operant conditioning do not require the kind of deliberative self-consciousness or “analog I” that Jaynes described. Even explicit memory—the ability to recall facts and details, for example—does not, strictly speaking, require deliberative self-consciousness. Clive Wearing, referred to above, could still remember how to play the piano, despite living in an “eternal present.” Thus, it is entirely possible that things such as ruminative self-consciousness emerged quite late in human history. Jaynes himself described why consciousness (as distinct from simply being functional and awake) is not required for learning, and can even be detrimental to it.

In more everyday situations, the same simple associative learning can be shown to go on without any consciousness that it has occurred. If a distinct kind of music is played while you are eating a particularly delicious lunch, the next time you hear the music you will like its sounds slightly more and even have a little more saliva in your mouth. The music has become a signal for pleasure which mixes with your judgement. And the same is true for paintings. Subjects who have gone through this kind of test in the laboratory, when asked why they liked the music or paintings better after lunch, could not say. They were not conscious they had learned anything. But the really interesting thing here is that if you know about the phenomenon beforehand and are conscious of the contingency between food and the music or painting, the learning does not occur. Again, consciousness reduces our learning abilities of this type, let alone not being necessary for them…

The learning of complex skills is no different in this respect. Typewriting has been extensively studied, it generally being agreed in the words of one experimenter “that all adaptations and short cuts in methods were unconsciously made, that is, fallen into by the learners quite unintentionally.” The learners suddenly noticed that they were doing certain parts of the work in a new and better way.

Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying the words. The important thing here is that the subject is not aware that he is learning anything at all. He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it. OoCitBotBM; pp. 33-35

But we do not only use our memory to recall past experiences; we also use it to think about future events, and this draws on the same ability to mentally time travel. It may seem paradoxical to think of memory as having anything to do with events that haven’t happened yet, but brain scans show that similar areas of the brain are activated when recalling past events and when envisioning future ones—particularly the prefrontal cortex, but also parts of the medial temporal lobe. There is slightly more activity when imagining future events, probably because of the greater creativity that activity requires.


In this ability to mentally time travel we seem to be unique among animals, at least in the extent to which we do it and the flexibility with which we do so:

So far, there is little convincing evidence that animals other than humans are capable of mental time travel—or if they are, their mental excursions into past or future have little of the extraordinary flexibility and broad provenance that we see in our own imaginative journeys. The limited evidence from nonhuman animals typically comes from behaviors that are fundamentally instinctive, such as food caching or mating, whereas in humans mental time travel seems to cover all aspects of our complex lives. p. 112

Animals Are ‘Stuck In Time’ With Little Idea Of Past Or Future, Study Suggests (Science Daily)

However, see: Mental time-travel in birds (Science Daily)

We are always imagining and anticipating, whether it is events later the same day or events years from now. Even in a conversation, we are often planning what we are about to say, rather than focusing on the conversation itself. That is, we are often completely absent from the present moment, which is something that techniques like mindfulness meditation are designed to mitigate. We can even imagine events after we are dead, and it has been argued that this knowledge lies behind many uniquely human behaviors, such as the notion of an afterlife and the idea of religion more generally. The way psychologists study this is to use implicit memory (as described above) to remind people of their own mortality. This is done through a technique called priming:

Priming is remarkably resilient. In one study, for example, fragments of pictures were used to prime recognition of whole pictures of objects. When the same fragments were shown 17 years later to people who had taken part in the original experiment, they were able to write the name of the object associated with each fragment much more accurately than a control group who had not previously seen the fragments. p. 88

When primed with notions of death and their own mortality, people in general have been shown to become more authoritarian, more aggressive, more hostile to out-groups and simultaneously more loyal to in-groups. Here’s psychologist Sheldon Solomon describing the effect in a TED Talk:

“Studies show that when people are reminded of their mortality, for example, by completing a death anxiety questionnaire, or being interviewed in front of a funeral parlor, or even exposed to the word ‘death’ that’s flashed on a computer screen so fast—28 milliseconds—that you don’t know if you’ve even seen anything—When people are reminded of their own death, Christians, for example, become more derogatory towards Jews, and Jews become more hostile towards Muslims. Germans sit further away from Turkish people. Americans who are reminded of death become more physically aggressive to other Americans who don’t share their political beliefs. Iranians reminded of death are more supportive of suicide bombing, and they’re more willing to consider becoming martyrs themselves. Americans reminded of their mortality become more enthusiastic about preemptive nuclear, chemical and biological attacks against countries who pose no direct threat to us. So man’s inhumanity to man—our hostility and disdain toward people who are different—results then, I would argue, at least in part from our inability to tolerate others who do not share the beliefs that we rely on to shield ourselves from mortal terror.”

Humanity at the Crossroads (YouTube)

One important aspect of episodic memory is that it locates events in time. Although we are often not clear precisely when remembered events happened, we usually have at least a rough idea, and this is sufficient to give rise to a general understanding of time itself. It appears that locating events in time and locating things in space are related abilities.

Episodic memory allows us to travel back in time, and consciously relive previous experiences. Thomas Suddendorf called this mental time travel, and made the important suggestion that mental time travel allows us to imagine future events as well as remember past ones. It also adds to the recursive possibilities; I might remember, for example, that yesterday I had plans to go to the beach tomorrow. The true significance of episodic memory, then, is that it provides a vocabulary from which to construct future events, and so fine-tune our lives.

What has been termed episodic future thinking, or the ability to imagine future events, emerges in children at around the same time as episodic memory itself, between the ages of three and four. Patients with amnesia are as unable to answer questions about past events as they are to say what might happen in the future… p. 100

Once again, the usefulness of this will be determined by the social environment. I will argue later that this ability to mentally time travel, like the ability to “read minds” (which we’ll talk about next), became more and more adaptive over time as societies became more complex. For example, it would play little to no role among immediate-return hunter-gatherers (such as the Pirahã), who live mostly in the present and do not have large surpluses. Among delayed-return hunter-gatherers and horticulturalists, however, it would play a far larger role.

When we get to complex foragers and beyond, however, the ability to plan for the future becomes almost like a super-power. And here, we see a connection I will make between recursion and the Feasting Theory we’ve previously discussed. Simply put, an enhanced sense of future states allows one to more effectively ensnare people in webs of debt and obligation, which can then be leveraged to gain wealth and social advantage. I will argue that this is what allowed the primordial inequalities to form in various societies which could produce surpluses of wealth. It also demonstrates the evolutionary advantages of recursive thinking.

Corballis then ties together language and mental time travel. He posits that the recursive nature of language evolved specifically to allow us to share past and future experiences. It allows us to narratize our lives, and to tell that story to others, and perhaps more importantly, to ourselves.

Language allows us to construct things that don’t exist—shared fictions. It allows us to tell fictional stories of both the past and the future.

Episodic memories, along with combinatorial rules, allow us not only to create and communicate possible episodes in the future, but also to create fictional episodes. As a species, we are unique in telling stories. Indeed the dividing line between memory and fiction is blurred; every fictional story contains elements of memory, and memories contain elements of fiction…Stories are adaptive because they allow us to go beyond personal experience to what might have been, or to what might be in the future. They provide a way of stretching and sharing experiences so that we are better adapted to possible futures. Moreover, stories tend to become institutionalized, ensuring that shared information extends through large sections of the community, creating conformity and social cohesion. p. 124

The main argument … is that grammatical language evolved to enable us to communicate about events that do not take place in the here and now. We talk about episodes in the past, imagined or planned episodes in the future, or indeed purely imaginary episodes in the form of stories. Stories may extend beyond individual episodes, and involve multiple episodes that may switch back and forth in time. The unique properties of grammar, then, may have originated in the uniqueness of human mental time travel…Thus, although language may have evolved, initially at least, for the communication of episodic information, it is itself a robust system embedded in the more secure vaults of semantic and implicit memory. It has taken over large areas of our memory systems, and indeed our brains. p. 126


The mental faculties that allow us to locate, sort and retrieve events in time are apparently the same ones that we use to locate things in space. Languages have verb tenses that describe when things took place (although a few languages lack tense). The ability to range at will over past, present and future gave rise to stories, which are often the glue that holds societies together, such as origin stories or tales of distant ancestors. Is the image above truly about moving forward in space, or is it about something else? What does it mean to say things like we “move forward” after a tragedy?

Different sets of grid cells form different grids: grids with larger or smaller hexagons, grids oriented in other directions, grids offset from one another. Together, the grid cells map every spatial position in an environment, and any particular location is represented by a unique combination of grid cells’ firing patterns. The single point where various grids overlap tells the brain where the body must be…Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.

The Brain Maps Out Ideas and Memories Like Spaces (Quanta)
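
The article above describes the standard picture of grid cells: each cell fires on the vertices of its own hexagonal lattice, and lattices of different scales, orientations and offsets jointly pin down a location. A common idealization in the modelling literature (not something taken from the Quanta piece, and simplified here) approximates a grid cell’s firing rate as the sum of three cosine gratings oriented 60 degrees apart:

import numpy as np

def grid_cell_rate(x, y, spacing=1.0, orientation=0.0, offset=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): three plane waves
    60 degrees apart sum to a hexagonal pattern; rescaled to lie in [0, 1]."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)       # wave number for the chosen grid spacing
    rate = sum(
        np.cos(k * ((x - offset[0]) * np.cos(theta) + (y - offset[1]) * np.sin(theta)))
        for theta in (orientation, orientation + np.pi / 3, orientation + 2 * np.pi / 3)
    )
    return (rate + 1.5) / 4.5                    # raw sum ranges from -1.5 to 3

# Two cells with different spacings and offsets respond differently to the same
# place, so the pattern of rates across a population singles out the location.
print(grid_cell_rate(0.0, 0.0))                                  # 1.0: a vertex of this cell's grid
print(grid_cell_rate(0.0, 0.0, spacing=1.8, offset=(0.4, 0.1)))  # a different cell, a different rate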

It is likely that a dog, or even a bonobo, does not tell itself an ongoing “story” of its life. It simply “is”. If we accept narratization as an important feature of introspective self-consciousness, then we must accept that the ability to tell ourselves these internal stories is key to the creation of that concept. But when did we acquire this ability? And is it universal? Clearly, it has something to do with the acquisition of language. And if we accept a late origin of language, it certainly cannot have arisen more than 50,000–70,000 years before the present. To conclude, here is an excerpt from a paper Corballis wrote for the Royal Society:

the evolution of language itself is intimately connected with the evolution of mental time travel. Language is exquisitely designed to express ‘who did what to whom, what is true of what, where, when and why’…and these are precisely the qualities needed to recount episodic memories. The same applies to the expression of future events—who will do what to whom, or what will happen to what, where, when and why, and what are we going to do about it…To a large extent, then, the stuff of mental time travel is also the stuff of language.

Language allows personal episodes and plans to be shared, enhancing the ability to plan and construct viable futures. To do so, though, requires ways of representing the elements of episodes: people; objects; actions; qualities; times of occurrence; and so forth…The recounting of mental time travel places a considerable and, perhaps, uniquely human burden on communication, since there must be ways of referring to different points in time—past, present and future—and to locations other than that of the present. Different cultures have solved these problems in different ways. Many languages use tense as a way of modifying verbs to indicate the time of an episode, and to make other temporal distinctions, such as that between continuous action and completed action. Some languages, such as Chinese, have no tenses, but indicate time through other means, such as adverbs or aspect markers. The language spoken by the Pirahã, a tribe of some 200 people in Brazil, has only a very primitive way of talking about relative time, in the form of two tense-like morphemes, which seem to indicate simply whether an event is in the present or not, and Pirahã are said to live largely in the present.

Reference to space may have a basis in hippocampal function; as noted earlier, current theories suggest that the hippocampus provides the mechanism for the retrieval of memories based on spatial cues. It has also been suggested that, in humans, the hippocampus may encompass temporal coding, perhaps through analogy with space; thus, most prepositions referring to time are borrowed from those referring to space. In English, for example, words such as at, about, around, between, among, along, across, opposite, against, from, to and through are fundamentally spatial, but are also employed to refer to time, although a few, such as since or until, apply only to the time dimension. It has been suggested that the hippocampus may have undergone modification in human evolution, such that the right hippocampus is responsible for the retrieval of spatial information, and the left for temporal (episodic or autobiographical) information. It remains unclear whether the left hippocampal specialization is a consequence of left hemispheric specialization for language, or of the incorporation of time into human consciousness of past and future, but either way it reinforces the link between language and mental time travel.

The most striking parallel between language and mental time travel has to do with generativity. We generate episodes from basic vocabularies of events, just as we generate sentences to describe them. It is the properties of generativity and recursiveness that, perhaps, most clearly single out language as a uniquely human capacity. The rules governing the generation of sentences about episodes must depend partly on the way in which the episodes themselves are constructed, but added rules are required by the constraints of the communication medium itself. Speech, for example, requires that the account of an event that is structured in space–time be linearized, or reduced to a temporal sequence of events. Sign languages allow more freedom to incorporate spatial as well as temporal structure, but still require conventions. For example, in American sign language, the time at which an event occurred is indicated spatially, with the continuum of past to future running from behind the body to the front of the body.

Of course, language is not wholly dependent on mental time travel. We can talk freely about semantic knowledge without reference to events in time… However, it is mental time travel that forced communication to incorporate the time dimension, and to deal with reference to elements of the world, and combinations of those elements, that are not immediately available to the senses. It is these factors, we suggest, that were in large part responsible for the development of grammars. Given the variety of ways in which grammars are constructed, such as the different ways in which time is marked in different languages, we suspect that grammar is not so much a product of some innately determined universal grammar as it is a product of culture and human ingenuity, constrained by brain structure.

Mental time travel and the shaping of the human mind (The Royal Society)

Next time, we’ll take a look at another unique recursive ability of the human mind: the ability to infer the thoughts and emotions of other people, a.k.a. the Theory of Mind.