New Old Architecture

In Europe, you are surrounded everywhere you turn by majestic stone architecture. The Gothic cathedrals and castles that occupy its cities and countryside provide copious examples of the wonder and beauty of building in stone.

Stone is one of the oldest materials used in construction. It is also the most durable. The oldest surviving buildings in the world are carved from stone, or made from assembled stones. Göbekli Tepe is made from stone. The Pyramids are made from stone. It’s also a very local material—building from local stone gives a place a distinctive feel. That’s why Paris looks the way it does: the cream-colored Lutetian limestone quarried from the banks of the Seine.

The reason, I think, that we find stone such a compelling material is that, before we built our own structures, we occupied “natural rooms” that the earth made for us—caves. The stone walls of caves, lit by tallow lamps and torches and adorned with spectacular artwork, were our earliest permanent homes, and our earliest cathedrals.

The stone walls in a Gothic cathedral or medieval castle are bearing walls, providing both shelter from the elements and support for the overall structure. The walls of modern buildings, by contrast, do not carry any load besides their own weight. The structure is separate, usually a skeleton frame of steel or concrete. Modern walls are typically either curtain walls (clipped onto or tied back to the supporting structure) or infill walls (sitting on the structure and filling the gaps within it).

Bearing Walls: Monolithic Masonry Construction (Columbia University)

Walls in commercial construction today are usually cavity walls, consisting of a facing material held to the structure by some sort of clip system, creating a cavity between the supporting structure of the wall (typically studs or masonry) and the veneer. This cavity is designed to resist the penetration of water (since liquid moisture cannot leap a cavity*). The cavity also gives us a place to put the insulation.

Facing materials are usually panelized systems of fiber-cement, metal, porcelain, treated wood, phenolic resin, or some other weather-resistant material. Even in walls that appear to be solid brick or stone, the brick or stone is merely a facing material held by clips to a wall usually composed of wood or metal studs.

Which is what made this article so fascinating to me: The miracle new sustainable product that’s revolutionising architecture – stone! (The Guardian)

The article discusses using stone as an actual self-supporting wall rather than a thin veneer, and even as a potential bearing material.

The article is based on a London exhibition, entitled The New Stone Age, of architecture that uses stone as a true building material rather than just a facade veneer. Here is the BBC’s coverage:

Design’s new stone age is here (BBC)

Featured prominently is 15 Clerkenwell Close, a six-story building by architect Amin Taha, which uses cut stones as the facing material of the building. The stone is deliberately left in the condition in which it was quarried rather than being dressed, leading to a variegated facade that resembles an urban ruin. This approach has pleased some, and left others so distressed that they launched a campaign to tear the building down!

…The result looks like what might have happened if Mies van der Rohe had been weaned on The Flintstones. It features a load-bearing exoskeleton made of massive chunks of limestone brought straight from the quarry.

The blocks have been left with their raw quarrying marks exposed and stacked on top of each other to form columns and beams. Some of the slabs’ faces show the lines where they were drilled from the rock face, others are sawn smooth as if cut by a cheese wire, while some bear the rugged texture of the sedimentary seam, freshly pried from the Earth’s crust.

The building’s geological power was too much for one Islington councillor, who complained that the “awful” building was out of keeping with the historic neighbourhood, and ensured that a demolition notice was issued, based on a supposed breach of planning permission. Taha finally won the case last year, on the proviso that the smooth stone be roughened up to look like the rest (a process which, after testing, has thankfully proven structurally too risky to carry out).

The problem with a solid stone wall is that there is no insulation or waterproofing layer as there is in a typical cavity wall. From the details on the architect’s website, it looks like there is a secondary wall behind the stone facade that accomplishes these functions. The stone is stabilized by metal anchors which tie it back to the main structure.

Another building very similar to the one discussed in the Guardian article is 30 Finsbury Square, also in London, by Eric Parry Architects. Unlike Clerkenwell Close, the stone here is dressed and smooth, and the facade is designed in a manner reminiscent of Italian rationalists like Aldo Rossi or Giuseppe Terragni.

There are a few instances where stone is used as both shelter and bearing material. For example, the article prominently features a photo of this winery in France:

Delas Frères Winery in France (ArchDaily)

And another example: a radio station in the Himalayas that appears to be built out of solid stonework. It’s difficult to imagine this being built anywhere else, though; I don’t think you could build something like this in downtown London or an American suburb.

The BBC article also mentions Can Lis, the house in Mallorca that Jørn Utzon (architect of the Sydney Opera House) built for himself.

The article also prominently features a French firm, Perraudin Architecture, which builds using stone as a structural material, as opposed to just a veneer or facade material. This gives their projects an amazing texture and heft that you just don’t see often in modern architecture. I would imagine France has a tradition of stonemasonry that goes very far back, indeed.

House made of solid stone in Lyon by Perraudin Architecture (Dezeen)

And another example with timeless beauty—social housing in France built out of solid exposed stone walls:

The building is entirely built up in load-bearing limestone walls of 40 cm. Precise coursing elevations define each stone, to be extracted, dimensioned and numbered in the quarry and then transported to the site. There, they are assembled like toy blocks using nothing but a thin bed of lime mortar.

No paint or plaster was added to the walls, so the stone surfaces are left bare to display traces of the quarrying process. Projecting courses of stone on the exterior mark the boundaries between floors and help to direct rainwater away from the windows.

Social housing with solid stone walls by Perraudin Architecture (Dezeen)

But what the article emphasizes is quarried stone as a more environmentally friendly alternative to concrete. Concrete production releases an enormous amount of carbon dioxide, whereas stone can be used as quarried, directly from the ground. From the Guardian article:

When you step inside the Building Centre, you are immediately confronted with a large model of a speculative proposal for a 30-storey office tower – designed to be made entirely from stone. It looks like a series of Clerkenwell Closes stacked on top of each other, the chunky stone columns getting progressively thinner as they rise towards the clouds.

“We wanted to prove that a solid stone tower is eminently possible,” says Taha, handing me a substantial technical report that makes a hard-nosed case for such a building on grounds of both cost and carbon footprint. Using stone for the core, structure and floors, they argue, would be 75% cheaper than a steel and concrete structure, and have 95% less embodied carbon. The primary reason for the saving is that, while concrete and steel have to be fireproofed, weathered, insulated, then clad, a stone exoskeleton can be left exposed…

“Stone,” says architect Amin Taha, “is the great forgotten material of our time. In 99% of cases, it’s cheaper and greener to use stone in a structural way, as opposed to concrete or steel, but we mostly just think of using it for cladding.”…

The tactile qualities of stone are clear, but, for Taha, the environmental argument is what makes it such an important material to champion. “As a profession, we’re not thinking clearly about the embodied energy of building materials,” he says. “The perverse thing about concrete is that you take limestone, crush it, then burn it, by which time it loses 60% of its structural strength – so you then have to put steel reinforcement inside it. It’s total madness.”

By embracing stone as combined superstructure and external architectural finish, he says, we can save 60-90% of CO2 emissions for these key building elements. “And we’re standing on a gigantic ball of molten rock, so we’re not going to run out of stone any time soon.”

The miracle new sustainable product that’s revolutionising architecture – stone! (The Guardian)

Compare this to concrete:

After water, concrete is the most widely used substance on Earth. If the cement industry were a country, it would be the third largest carbon dioxide emitter in the world with up to 2.8bn tonnes, surpassed only by China and the US… Taking in all stages of production, concrete is said to be responsible for 4-8% of the world’s CO2. Among materials, only coal, oil and gas are a greater source of greenhouse gases. Half of concrete’s CO2 emissions are created during the manufacture of clinker, the most energy-intensive part of the cement-making process.

But other environmental impacts are far less well understood. Concrete is a thirsty behemoth, sucking up almost a 10th of the world’s industrial water use. This often strains supplies for drinking and irrigation, because 75% of this consumption is in drought and water-stressed regions. In cities, concrete also adds to the heat-island effect by absorbing the warmth of the sun and trapping gases from car exhausts and air-conditioner units – though it is, at least, better than darker asphalt.

It also worsens the problem of silicosis and other respiratory diseases…

Concrete: the most destructive material on Earth (The Guardian)
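
Just to get a feel for what those percentage claims mean in practice, here is a crude back-of-envelope comparison. The per-cubic-meter carbon intensities below are illustrative assumptions I’ve made up for the arithmetic, not sourced figures:

```python
# Crude embodied-carbon comparison, purely illustrative.
# The kgCO2e-per-m3 intensities below are assumptions, not sourced figures.

EMBODIED_CARBON = {  # kgCO2e per cubic meter of installed material (assumed)
    "reinforced concrete": 400.0,
    "quarried limestone": 75.0,  # mostly cutting + transport; no calcination step
}

volume_m3 = 50.0  # structure for one hypothetical building bay

for material, intensity in EMBODIED_CARBON.items():
    print(f"{material}: {volume_m3 * intensity:,.0f} kgCO2e")

saving = 1 - EMBODIED_CARBON["quarried limestone"] / EMBODIED_CARBON["reinforced concrete"]
print(f"saving vs. concrete: {saving:.0%}")  # ~81% with these assumed numbers
```

With those assumed numbers, stone comes out roughly 80% lower, comfortably inside the 60-90% range Taha cites; the real figure depends heavily on quarry distance and on how much steel and fireproofing the concrete alternative would need.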

Plus, there’s just something about the “feel” of natural stone that can’t be captured by other materials. That’s why it has faced our buildings from ancient Egypt to ancient Rome to medieval Europe. Due to its “natural” qualities and heft, a solid stone wall simply “feels” better than modern veneer walls, in my opinion. In addition, stone and brick acquire a warm, pleasing patina over time, and are amenable to all sorts of creative expression not possible with panelized systems, and certainly not with aluminum curtain walls. For example, Brick Expressionism (Wikipedia) was common in the early twentieth century, and the beauty, variety, and expressiveness of carved stone facades from that era is still evident.

Alongside its sustainable qualities, it’s the material’s one-off nature that really appeals to the design world. “People increasingly want the authentic beauty and inconsistencies of natural stone,” says Solid Nature’s David Mahyari. “Imitation ceramic tiles include realistic veins but have a repeat pattern like wallpapers, so you can tell quickly that they’re fake.”

Its age is also a factor. “Stone is a material that is millions of years old. Can you imagine this? I am completely convinced that this dimension also changes the way we relate to a stone object, establishing a different kind of connection with it and making it, somehow, more precious.”

London-based stone carver Simon Smith backs this up: “If the stone ‘takes a polish’, it’s like opening the door of the stone and seeing deep into it, and millions of years back in time.”

I’m not alone. I recently ran across this paragraph describing a project that used stone for vertical shading devices (The Jackman Law Building at the University of Toronto). It explained why the designers fought for natural stone instead of precast concrete for the shading devices:

The choice of stone for the shade fins stems from an aspiration to counter a look of mindless mediocrity that [Architect Siamak] Hariri sees being inflicted on cities by the widespread use of ersatz materials. Imitations lack the dignity, patina, and subtle variety of natural materials, he says, and he advocates for beauty as a value in its own right, as well as for its contribution to durability: “A really good building is one that people will not let be taken down.”

Continuing Education: Vertical Shading Devices (Architectural Record)

Traditionally, stone and brick cannot span spaces except by means of an arch, a vault (basically an extruded arch), or a dome (a revolved arch). Thus, the spanning material in stone buildings was often timber, with steel beams or trusses in newer buildings.

There is a way, however, to have stone span spaces: the flat stone vault, patented in 1699 by French engineer Joseph Abeille. Abeille’s vault has recently been used on an innovative project in Jerusalem, a gift shop added onto an old Crusader church:

The columns of the new shop are made out of massive stone, and the ceiling is a flat stone vault composed of 169 interlocking voussoirs. The system is inspired by the invention of French engineer Joseph Abeille (1673-1756), who patented in 1699 a special system that allowed the building of flat vaults.

The Flat Vault / AAU ANASTAS (Arch Daily)

The flat stone vault is completed! (AAU ANASTAS)

Years ago, Low-Tech Magazine did a story on the timbrel (or Catalan) vault:

Tiles as a substitute for steel: the art of the timbrel vault (Low Tech Magazine)

The Nubian vault is constructed of mud brick without requiring any temporary support during construction.

Builders have even used such vaults to construct multi-story buildings without any concrete or steel:

The Sustainable Urban Dwelling Unit (SUDU) (No Tech Magazine)

These structures can be subsumed under the rubric of reciprocal supporting structures, in which each structural member both supports and is supported by the others, with a few members transferring the total load to the ground or supports (an interesting metaphor for society, no?). Reciprocal supporting structures are becoming increasingly popular.

Some examples of basic brick monolithic walls are Louis Kahn’s Indian Institute of Management, and this house in Vietnam, which uses a perforated brick skin to wrap the house.

Incidentally, I’ve noticed a distinct trend in modern architecture to have not just a single skin, but to divide the exterior from the interior using layers—for example, a layer of sun screening, or a layer for privacy as in the house above, or balconies wrapped around the building to create a “semi-private” space, as in this building:

Colonnades line the terraces of Antonini Darmon’s Arches Boulogne apartments (Dezeen)

I wonder if cementitious foam insulation (Airkrete), sprayed inside a cavity, could give stone the necessary R-values to be used as an exterior wall without a second layer. Waterproofing could be accomplished by a hydrophobic coating. Such a wall would have decent thermal and waterproofing performance, not to mention be practically permanent (and beautiful, too!).
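
As a sanity check on that idea, here’s a quick R-value sum for such an assembly, written as a minimal Python sketch. The per-inch R-values are rough assumptions (stone insulates terribly, around R-0.08 per inch; cementitious foams advertise something like R-3.9 per inch), so treat the numbers as placeholders:

```python
# Back-of-envelope R-value for a solid stone wall with a foam-filled cavity.
# All material values are rough assumptions for illustration, not vendor specs.

def layer_r(thickness_in: float, r_per_inch: float) -> float:
    """R-value of one layer: thickness (inches) times R per inch."""
    return thickness_in * r_per_inch

layers = {
    "limestone, 8 in @ ~R-0.08/in": layer_r(8.0, 0.08),
    "cementitious foam, 4 in @ ~R-3.9/in": layer_r(4.0, 3.9),
    "interior plaster, 0.5 in @ ~R-0.2/in": layer_r(0.5, 0.2),
}

for name, r in layers.items():
    print(f"{name}: R-{r:.2f}")
print(f"total assembly: R-{sum(layers.values()):.1f}")  # ~R-16 with these numbers
```

Under these assumptions the foam does essentially all of the insulating; the stone contributes mass, weather resistance, and structure, which is consistent with why a solid stone wall needs an insulated cavity (or a secondary wall) to meet modern energy codes.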

And there are some other promising new materials that can serve a structural role and provide a beautiful texture. One that’s getting a lot of attention is cross-laminated timber (CLT). CLT consists of wood planks glued together to create a structurally stable panel (NLT, nail-laminated timber, uses nails to hold the planks together instead). The planks in adjacent layers are set at right angles to each other to provide structural stability, similar to how plywood is made, just at a larger scale. It’s part of a growing suite of mass timber technologies:

Mass timber is a generic term that encompasses products of various sizes and functions, like glue-laminated (glulam) beams, laminated veneer lumber (LVL), nail-laminated timber (NLT), and dowel-laminated timber (DLT). But the most common and most familiar form of mass timber, the one that has opened up the most new architectural possibilities, is cross-laminated timber (CLT).

To create CLT, lumber boards that have been trimmed and kiln-dried are glued atop one another in layers, crosswise, with the grain of each layer facing against the grain of the layer adjacent.

Stacking boards together this way can create large slabs, up to a foot thick and as large as 18-feet-long by 98-feet-wide, though the average is something more like 10 by 40. (At this point, the size of slabs is restricted less by manufacturing limitations than by transportation limitations.)

Slabs of wood this large can match or exceed the performance of concrete and steel. CLT can be used to make floors, walls, ceilings — entire buildings.

The hottest new thing in sustainable building is, uh, wood (Vox)

Two things make CLT so compelling. The wood facing provides a beautiful surface which can be left exposed on the inside (on the outside you will still require waterproofing, insulation, and cladding). But perhaps the most attractive feature is that, since the panels are made from trees, they store carbon drawn from the air rather than emitting more of it.

Unlike stone, wood is commonly used to span, and has been the most common material to do so since ancient times. The modern use of CLT leads to a wide range of structural expressions, with nearly endless variation:


Mass Timber Primer (Canadian Architect)

CLT is the hot material of the moment, and many designers are clamoring to build innovative large-scale structures with it. There are all sorts of proposals out there, from medium-sized buildings to skyscrapers (because we always have to build skyscrapers out of the hot new material for some reason). Although, for smaller-scale and residential structures, I wonder why structural insulated panels (SIPs) are not more popular. Those have been around for a long time (an innovative use of SIPs is the Ashen Cabin by HANNAH Architecture and Design).

Once upon a time, wood was a primary building material across much of the world. But with industrialization, that changed in the West.

German architect Arnim Seidel explains that steel and concrete became the dominant building materials to meet 20th-century demands: wide bridges, tall buildings, heavy loads.

“Wood came to be seen as backwards,” Seidel told DW.

Now, its environmental advantages are being recognized.

Materials like steel and concrete require massive amounts of energy to produce, and are usually transported over long distances. This emits CO2 that contributes to climate change.

By some estimates, producing a ton of concrete, or about a cubic meter, generates 410 kilograms of CO2 equivalent — the same amount of energy could power an average house for more than 10 days.

Locally harvested wood from sustainably managed forests not only has a much smaller carbon footprint in its production.

Using wood in buildings also sequesters carbon dioxide. When plants perform photosynthesis, this removes CO2 from the atmosphere and stores it in the wood.

“When we build with wood, we can conserve this stored CO2 for a longer period of time, and not emit it into the atmosphere,” Seidel told DW.

Wood: renewable construction material of the future? (DW)

Another material that is making a comeback is rammed earth:

Rammed earth is the descendant of ancient construction techniques like adobe or cob building. It can be used to build walls for many kinds of buildings, from houses to museums and even cemeteries.

The name says it all: it’s made of damp soil or earth that is placed in formwork, and then compressed or rammed into a solid, dense wall. As a construction technique, rammed earth almost disappeared with the development of reinforced concrete, but there has been a revival in interest because of its aesthetics and its perceived environmental benefits.

The carefully chosen mix of silt, sand, and gravel with a low clay content is moistened and then placed in about 4 inch deep layers between plywood forms; that’s why one sees the different colors and stripes, as often each layer is modified for aesthetic reasons. It used to be rammed by hand, but now powered rams are often used to reduce time and labor. Engineered structural reinforcing is often required.

Electric wiring and switch boxes can be built right into the wall as it goes up, so that a clean, interior earth finish can be maintained.

The structural potential of this material is more limited than that of the materials above. Cement-stabilized rammed earth has greater structural potential, but usually some sort of additional structure is used. Rammed earth walls tend to be mass walls, and this, along with other characteristics, limits them to fairly mild, drier climates such as the American Southwest, the Mediterranean, and Australia.

Like locally-quarried stone, using the earth from the site as a building material also anchors the building to the unique place, and allows us to surround ourselves with materials that look millions of years back in time.

The Dirt on Rammed Earth (Treehugger)

The world’s most beautiful homes are also down to earth (Curbed)

In summary, there are a lot of innovative materials, and new ways to use old materials, that open up many possibilities for building design going forward. Let’s hope we can rise to the challenge and create a more inspiring built environment than has often been the case in the recent past.

* Of course it can, really (wind-driven rain, for example), but I’m trying to keep this simple!

Fun Facts

It’s time for a summer edition of fun facts!

The highest paid athlete of all time was a Roman charioteer; if he had lived today he would have been worth $15 billion.
https://www.thevintagenews.com/2017/01/18/the-highest-paid-athlete-of-all-time-was-a-roman-charioteer-if-he-had-lived-today-he-would-have-been-worth-15-billion/

Air pollution is responsible for shortening people’s lives worldwide on a scale far greater than wars and other forms of violence, parasitic and vector-borne diseases such as malaria, HIV/AIDS, and smoking.
https://www.escardio.org/The-ESC/Press-Office/Press-releases/The-world-faces-an-air-pollution-pandemic

California loses up to $1 billion in crops each year because of air pollution.
https://www.theverge.com/2020/3/16/21181725/air-pollution-california-crops-agriculture-1-billion

By 2010, 43,600 jobs had been lost or displaced in Michigan – and about 700,000 in the United States – due to the rise in the trade deficit with Mexico alone since NAFTA was enacted in 1994.
https://www.citizen.org/article/michigan-job-loss-during-the-nafta-wto-period/

Distribution of Household Wealth in the U.S. since 1989 (Federal Reserve)

As of December 2016, more than 129 million Americans have only one option for broadband internet service in their area – equating to about 40 percent of the country.
https://sites.psu.edu/netneutrality/2018/02/28/the-internet-monopoly/

The average consumer throws away 60 percent of clothing within a year of purchase.
https://www.treehugger.com/sustainable-fashion/we-throw-away-far-too-much-clothing.html

There are more payday loan stores in the US than there are Starbucks or McDonald’s locations.
https://research.stlouisfed.org/publications/page1-econ/2019/04/10/fast-cash-and-payday-loans

‘Idiot’ once specifically referred to somebody with the mental age of a 2-year-old, ‘imbecile’ to somebody with the mental age of a 3- to 7-year-old, and ‘moron’ to somebody with the mental age of a 7- to 10-year-old.
https://eugenicsarchive.ca/discover/tree/53480acd132156674b0002c3

If cows were a country, they would be the third-largest greenhouse gas emitter in the world.

Three out of four new or emerging infectious diseases are zoonotic.

There are fewer American farmers today than there were during the Civil War, despite America’s population being nearly 11 times greater.
https://www.nytimes.com/2020/05/21/opinion/coronavirus-meat-vegetarianism.html

Five companies own 80% of all stock in S&P 500 listed companies.
https://www.reddit.com/r/LateStageImperialism/comments/cbftd4/monopoly_the_deathknell_of_capitalism/

France’s longest border is with Brazil.
https://www.indexmundi.com/france/land_boundaries.html

Rudolph Hass, the man who grew and patented the original Hass avocado tree, didn’t make very much money despite its success as most people bought one single tree and then grew vast orchards from cuttings. He only made $5000 from his patent, and remained a postman his entire life.
https://en.wikipedia.org/wiki/Hass_avocado#History

The largest ancient pyramid in the world is buried inside a mountain in modern-day Mexico underneath a church.
https://www.bbc.com/future/article/20160812-the-giant-pyramid-hidden-inside-a-mountain

There is an inverse correlation between the amount of money spent on a wedding, and how long the marriage lasts. The more people spend on the ceremony, the more likely the couple will get divorced.
https://www.insider.com/study-couples-who-spend-more-on-weddings-more-likely-to-get-divorced-2018-7

As of 2018, there are 6 PR people for every journalist. Much of the change is attributed to the 45% loss of newspaper employees from 2008 to 2017. Additionally, the current median income of PR professionals is $61,150, compared to journalists’ $46,270.
https://muckrack.com/blog/2018/09/06/there-are-now-more-than-6-pr-pros-for-every-journalist

Brazil has nearly 60,000 murders a year, more than the US, Canada, Australia, all of Europe, China, and many Asian countries combined.
metrocosm.com/homicides-brazil-vs-world/

SSC, Doxxing, and Julian Jaynes

I was planning to comment on the writeup that Slate Star Codex did on The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes, which I was surprised had not been covered before. I suppose I’ll do it sooner rather than later, since Slate Star Codex has since been taken down. I guess that means I’ll be commenting on both topics.

As most of you are probably aware by now, the New York Times was planning on running an article about the blog that would have revealed the author’s real full name. The author, who blogs under the pen name Scott Alexander, claimed that the Times was going to “doxx” him, and that he needed to remain anonymous for professional reasons. As he describes it, removing the blog was the only way to stop the story from going out.

Now, I think the reasons he wished to remain anonymous were 100% legitimate: as a professional, there are certain ethical standards that you have to uphold, and if you have patients, having them able to read your opinions would probably color the doctor/patient relationship, which is particularly important with something like psychiatric counseling. He also thought that being named in the New York Times would make him easier to locate, and that this would endanger his housemates, because he has received a number of death threats in the past which he apparently believes are credible (as an aside: can anyone express an opinion today without receiving death threats? What does that say about our society?)

I don’t know about using the term “doxxing” though; that seems intentionally hyperbolic. From my understanding, “doxxing” implies malicious intent. It’s deliberately publishing details about a person’s offline identity in order to threaten, harass, intimidate, or bully that person. The Times was doing no such thing—for better or worse, their policy was to use people’s real names unless there was a compelling reason to maintain a person’s anonymity (such as informants, whistleblowers, etc.). You can certainly argue whether or not that’s a good policy (and I’m sure a lot of people think that it isn’t), but I’m sure the Times had their reasons, and there was no deliberate intent to harm Alexander or anyone else as far as I can tell from the story. For what it’s worth, I suspect this will eventually prompt the Times to change their policy, and the blog will be up again at some point in the future, so if you’re a fan of it, I wouldn’t worry.

Now, I’m hardly unbiased in this case. I too blog under a pseudonym, but for different reasons. I don’t have professional reasons to not use my real name, as I don’t have patients or clients. I do often have knowledge of confidential projects in my area, but I stringently make sure never to discuss my job or any of my professional work on this blog. And I’ve never received death threats, but even if I did, well, I live alone so if someone did decide to take me out, all that would happen is that I’d end up as dead on the outside as I am on the inside. It might even be doing me a favor.

Rather, I do it because I need to earn money to survive, and I don’t want potential employers to Google my name and find this blog or any of my opinions, even though I think they’re hardly radical or extreme. It’s sad that I have to worry about this, but that’s the world we live in. It also calls into question just how much “freedom” we really have in modern capitalist societies, but that’s a larger topic for another time. I’m scared shitless of what would come up if I actually did Google my real name, so I’ve never done it. When Jim put up my recent interview on The Attack Ads! Podcast, he initially published my real name, but he was kind enough to remove it and replace it with my pen name (kinder, it seems, than the New York Times!)

I have been doxxed in real life, however, and it was not a pleasant experience. I might as well go ahead and tell the story.

The last job I had before the one I have now was for a local architecture firm, which allowed me to practice again. I put the name of my employer on my Facebook profile (I know, I know, but we’ve all done stupid things in life that make us go ‘what were you thinking?’ in retrospect).

I had an acrimonious exchange on Facebook with some random asshole, but what I didn’t know was that this random asshole happened to know one of my co-workers at this firm (who was also an asshole). Thus, armed for revenge, he sent the exchange to this scumbag, who subsequently printed it out and literally took it from desk to desk around the entire firm, and directly to the firm’s managers/owners, before I even knew what was happening. Clearly this person was an absolute sociopath, who—like so many Americans—enjoys destroying people for sport and twisting the knife simply because he can. I was sternly reprimanded by the firm’s leaders, and I’m sure it was a major factor in my eventual dismissal, effectively ending my professional career. Oh, and this incident exactly coincided with my mother’s final months dying of cancer.

So doxxing isn’t a good thing.

And it’s not like this was an isolated incident, either. I’ve had many, many experiences like this over my professional career and in my life experience—enough that it’s routine by now. Perhaps I just attract bullies. Incidents like this have convinced me that people are inherently cruel and evil, and will absolutely hurt you the minute they get the chance. It has led to my developing misanthropy and paranoia. I still have many PTSD symptoms, including nightmares about that job.

Of course, I immediately deleted my Facebook profile. I do currently have one under a false name, but only because I still needed to sell some of my mother’s hoarded stuff online. I don’t post anything there or have any personal info, of course. In order to have access to the Marketplace, you need to have what Facebook considers to be a valid profile (presumably to deter scammers), so I signed up for a couple of groups to make the algorithm think I’m a real person and let me have access. One was about Cardinals. The other was a Julian Jaynes discussion group.

Which finally brings us around full circle to the real subject matter at hand. I’m writing this now because I have to go from memory, as the original post is obviously no longer online.

Alexander begins by “rewriting” the book along similar lines, keeping the parts of the premise he thinks are valuable, and omitting the parts that he thinks are incorrect or speculative. This allows him to summarize the book that he thinks Jaynes “should have written.”

I actually enjoyed this approach. Unlike most Julian Jaynes fans, I’m not a Jaynes absolutist. I’ve noticed that most Jaynes enthusiasts accept 100% of his thesis and tend to treat the book as holy writ. I like to pick and choose what I think is correct.

Alexander claims that what Jaynes was actually describing was the beginning of Theory of Mind, rather than consciousness in the Jaynesian sense.

Now, I do think that Jaynes’s choice of the term consciousness is problematic. Jaynes’s supporters will always point out that he goes to great lengths to define what he means by consciousness, and they’re right—he does! But the thing is, if you have to go to such lengths to define what you mean by a term, then the term is poorly chosen. For the average person, consciousness is just the state of being awake, and when they hear that Jaynes is claiming that ancient people lacked consciousness, even though he explains what he means by that (their awareness was different than ours), most people will still reject the thesis outright. In other words, merely by choosing this term, you start out in a hole, and you have to spend a lot of time digging out of it before you can even do the heavy lifting. And when you’ve got a thesis as “out there” as Jaynes’s, that’s even more of a problem.

I wrote about Theory of Mind in my series of posts about the Origin of Religion. From my understanding, Theory of Mind is the ability to understand that others have thoughts, feelings and ideas different than your own. From this perspective, then, Jaynes would be arguing that an ancient Greek person would be unable to perceive that his fellow Greeks had different thoughts or possessed different knowledge than he did. Put another way, an ancient Greek person at the time of Homer would fail the Sally-Anne test.
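
For readers who haven’t run into it, the Sally-Anne test checks whether a subject can track a belief that differs from reality. Here’s a toy sketch of the logic (hypothetical code, purely to make the distinction concrete):

```python
# Toy model of the Sally-Anne false-belief test (hypothetical illustration).
# Passing requires tracking that Sally's belief can differ from reality,
# i.e., possessing a theory of mind.

world = {"marble": "basket"}   # Sally puts the marble in the basket...
sally_belief = dict(world)     # ...and remembers doing so.

world["marble"] = "box"        # While Sally is away, Anne moves the marble.

def predict_where_sally_looks(has_theory_of_mind: bool) -> str:
    # With a theory of mind we answer from Sally's (now stale) belief;
    # without one, we can only answer from the actual state of the world.
    return sally_belief["marble"] if has_theory_of_mind else world["marble"]

print(predict_where_sally_looks(True))   # basket -> passes the test
print(predict_where_sally_looks(False))  # box    -> fails the test
```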

But as far as I can tell, that’s not what Jaynes was saying at all! I find it hard to believe that the author got this concept wrong, considering he’s allegedly a psychiatrist. Maybe there’s some confusion of terminology here. Voice hearing has nothing to do with this ability. As far as I know, voice hearers and schizophrenics are still aware that other people have minds of their own.

Instead, the term I would use for what Jaynes is describing is meta-consciousness, or meta-awareness. This would mean that the book’s title would be The Origin of Meta-consciousness in the Breakdown of the Bicameral Mind, which I think is clearer. That concept is different than Theory of Mind. I would define meta-consciousness as being conscious of one’s own mental states. In this paradigm, consciousness is a thing that can be thought about and contemplated separately from one’s direct experience. Before, thoughts were just thoughts: there was no conceptual entity these thoughts were assigned to that would let you stand back from your own thoughts and reflect on them. It would be like trying to see your own eyeball without a reflection.

When people did have thoughts expressed as language inside their own heads (as opposed to verbalizations), they assigned these thoughts to a conceptual entity that has come down to us as “gods.” Given the slipperiness of language, it’s possible that the word “god” simply referred to this inner voice, rather than to a “real” person as often depicted. To aid this conception, the inner voice was assigned a persona: the persona of the god. Statues were made of these imaginary entities who were the source of such voices. They became cultural touchstones. Both temples and statues were expressly designed to “call forth” this inner voice and hear the god’s command (i.e. induce hallucinations).

What they did NOT have was a conception of an “inner self” or “soul” that these inner vocalizations could be assigned to. At least, not yet. Over time, they developed this conceptual framework through the expansion of metaphor, and this entity, rather than a “god,” became the source of these nonverbalized thoughts. They heard this voice, then, not as a hallucination commanding them to do things (or rather, what we would term a hallucination), but more as a voice under their conscious control, as surely as the ones that gave rise to verbal communication with their fellow men. “Consciousness is (a mental process creating) an introspectable mind-space.” That “introspectable mind space” is different than theory of mind, which has to do with how we perceive others.

Previously, I suggested that this was somehow related to the mind’s ability to grasp recursion, based on Douglas Hofstadter’s ideas about the recursive nature of consciousness. Once the mind could grasp the principle of recursion, it could develop meta-awareness, which is turning thoughts back on oneself as if in a hall of mirrors. This allowed for the development of a new kind of consciousness in which people perceived the voices in their heads as originating from the ‘self’ rather than a ‘god.’ I noted that the few populations who do not seem to have recursive structures in their language do indeed seem to have very fluid and undefined senses of self by our standards, and are prone to what from our vantage point would be hallucinations. This is speculation, however.
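
To make the recursion analogy concrete, here’s a trivial sketch (my own illustration, not anything from Hofstadter or Jaynes) of self-reference: a function that feeds its own output back into itself, the way meta-awareness turns thought back on the thinker:

```python
# Self-reference in miniature: a function that takes its own output as input,
# nesting "awareness of awareness" like reflections in a hall of mirrors.
def reflect(thought: str, depth: int) -> str:
    if depth == 0:
        return thought
    return reflect(f"I am aware that ({thought})", depth - 1)

print(reflect("it is raining", 3))
# -> I am aware that (I am aware that (I am aware that (it is raining)))
```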

Alexander claims that Jaynes pins the breakdown of bicameral consciousness on increased trading during the Bronze Age, and the requirement to deal with other people in order to trade. To negotiate deals, you need to be able to put yourself in the mind of another person. Since he is operating on the assumption that Jaynes was talking about theory of mind, this makes sense. But Jaynes wasn’t really talking about this at all.

Although Jaynes does mention the increased trading during the Bronze Age, it is more the need for novel behaviors in general that he pinpoints, rather than just the need to trade per se. Jaynes argues that bicameralism was useful in a world where routine behaviors were the norm, and where people would hear the voices of their leaders in their heads commanding them what to do. In contrast, when such top-down command structures did not work—such as when dealing with outsiders—it called forth new types of behavior, and this is what caused the breakdown of bicameral consciousness, not simply trade.

What’s also odd is that an even bigger culprit in Jaynes’s view is the advent of the written word, which Alexander omits completely. Oral cultures would favor bicameralism, because orders are passed down vocally from the leaders, who then become gods in their heads commanding them. But with the written word, one takes command of one’s own inner voice. You use your brain in a completely different manner in the act of reading than you do in a world where 100% of interpersonal communication is via speech. This seems like a much more likely explanation of the shift in brain function than just trade alone. Why not mention it? He also omits many of Jaynes’s ideas about the value of metaphor in language. Language is what allows us to construct the metaphorical self and the “Analog I.”

Alexander briefly mentions that Jaynes’s conception of the split brain was based on Michael Gazzaniga’s research (and through him Roger Sperry’s), and that a lot of this research has been debunked or superseded. He offers no sources to back up this claim, however. I was surprised by this, because one would have thought that a psychiatrist—who is, after all, a doctor specializing in the brain—would have more qualifications here than anywhere else. From my readings, it appears that a good portion of Jaynes’s claims about how the mind processes language across the hemispheres has comported with newer research, even if the concept of bicameralism as a whole has not.

There is also no mention of the reassessment of Jaynes’s thesis by a cross-disciplinary team in 2007 that expressed qualified support for it: The bicameral mind 30 years on: a critical reappraisal of Julian Jaynes’ hypothesis. From what I recall of Charles Fernyhough’s The Voices Within, there has been some empirical support for Jaynes’s model of how the brain hears voices in recent research.

Neuroscience Confirms Julian Jaynes’s Neurological Model (The Julian Jaynes Society)

Split-Brain Researchers Are Split (Psychology Today)

There is also no mention of Jaynes’s ideas on hypnotism, which is strange. Most people associate Jaynes’s ideas with schizophrenia, which is the hearing of voices, after all. But Jaynes also claimed that his ideas explained hypnotism—hypnotism was a throwback to bicameral consciousness where verbal commands would trigger a trance mode. Both schizophrenia and hypnotism are “throwbacks” to bicameral consciousness, he argued. He even claims that there is no other valid explanation for this hypnotic state in the psychological literature; rather, it’s just handwaved away. As he writes:

…hypnosis is the black sheep of the family of problems which constitute psychology. It wanders in and out of laboratories and carnivals and clinics and village halls like an unwanted anomaly. It never seems to straighten up and resolve itself into the firmer properties of scientific theory. Indeed, its very possibility seems like a denial of our immediate ideas about conscious self-control on the one hand, and our scientific idea about personality on the other. Yet it should be conspicuous that any theory of consciousness and its origin, if it is to be responsible, must face the difficulty of this deviant type of behavioral control.

I think my answer to the opening question is obvious: hypnosis can cause this extra enabling because it engages the general bicameral paradigm which allows a more absolute control over behavior than is possible with consciousness. (original emphasis)

Whether he’s right or not, conventional psychology really does offer no good explanation for hypnotism, reinforcing his point. Hypnotherapy is a legitimate method of therapy nowadays, yet we have no real idea how or why it works!

Finally, Alexander does raise an objection I’ve always had, namely that if Jaynes’s thesis is correct, then anthropologists should have discovered a true bicameral culture somewhere in the world by now, especially in very remote cultures that have been cut off from the wider world. He notes that there are a lot of strange things going on with consciousness detailed in the anthropological literature, but nothing that rises to Jaynes’s description. He also notes that anthropological descriptions that comport somewhat with Jaynes’s description may have been published in various later books.

I believe he’s referring to Gods, Voices, and the Bicameral Mind: The Theories of Julian Jaynes, which is published by the Julian Jaynes society. I’ve been wanting to get a hold of that book, but have been reluctant due to recent events. But I’ve heard Jaynes’s partisans claim that bicameral consciousness has in fact been documented in the anthropological literature, and that the book contains some papers documenting this. So maybe I’m off base here.

Yes, it does seem that something exceptional is going on with the consciousness of pre-contact peoples, but nonetheless, it’s still a bit different than the scenario Jaynes describes in the book. People will mention the Pirahã for example. And while it’s true that there are any number of anomalous events recorded in descriptions of them, they are still different than the bicameral civilization as Jaynes outlines it.

This is often explained by claiming that bicameral consciousness was not a trait of small tribal peoples, but only began with the shift to larger societies during the Mesolithic period. They will point to the construction of large structures like the recently discovered prehistoric circle of shafts near Stonehenge as a sign of the onset of bicameralism. In chapter 1 of book two, he writes:

With but few exceptions, the plan of human group habitation from the end of the Mesolithic up to the relatively recent eras is of a god-house surrounded by man-houses.

Adding on to the idea of god houses, he also pinpoints this as the reason for the elaborate burials of deceased god-kings with grave goods:

The burial of the important dead as if they still lived is common to almost all these ancient cultures whose architecture we have just looked at. This practice has no clear explanation except that their voices were still being heard by the living, and were perhaps demanding such accommodation…these dead kings, propped up on stones, whose voices were hallucinated by the living, were the first gods. (p. 379)

Just about all ancient cultures, from the Near East, to Mesoamerica, to China, look after the departed with goods, food and offerings, and Jaynes claims this is because bicameral man still hallucinated the voices of the dead god-kings in their heads. These elaborate burials and town layouts do not occur with scattered bands of hunter-gatherers such as the Pirahã, or Australian aborigines, or any of the isolated cultures we are likely to find, goes the argument. In Jaynes’s conception, “early cultures develop into bicameral kingdoms.” And so it’s no surprise, say Jaynes’s defenders, that we wouldn’t find any such civilization that we can document anthropologically.

But I still insist we would have found something similar to this by now. There’s a lot of anthropological literature across a wide range of cultures spanning the entire world. In this conception, bicameralism is a transient phenomenon which arrives with the onset of larger cultures, and then disappears when those cultures come into contact with outsiders, or become literate. This would mean that bicameralism is a phenomenon lasting only a few thousand years at most. I don’t know if I’m willing to accept that.

Overall, aside from my quibbles above, I think the review did a good job of describing Jaynes’s ideas and taking them seriously on their own terms. I particularly liked the author’s point that writing off the numerous depictions of gods and men speaking directly to each other as simply metaphorical is basically “kind of cheating”—in a way it is. If we take these phenomena seriously, just as they were described, and don’t use the cheats and dodges of “it’s all just metaphorical,” then we come to very different conclusions.

For what it’s worth, I have an alternative reading of Jaynes that I’ve been meaning to write up for a while now. This obviously isn’t the time or the place. But my argument is essentially that, to borrow from Ran Prieur, “ancient people weren’t schizophrenic, they were tripping.” I think Ran’s basically correct. They weren’t literally tripping, of course; it’s just that their brains were working in a way more similar to a modern person on psychedelics than to a modern person’s everyday consciousness. Tripping people often hear voices and “see” entities as a matter of course. Any state of consciousness that the brain can achieve with a drug it can achieve without that drug.

The term comes from Robin Carhart-Harris’s work on psychedelics in the treatment of psychological disorders. He uses “entropy” to describe the differences in how the brain works on a psychedelic versus “normal” consciousness. Entropic brains have a much less defined sense of self, and process the world around them in a fundamentally different way than less entropic ones. I think the way ancient people processed the world was something closer to the entropic brain on a psychedelic, or to the way children perceive things (incidentally, meditation has been shown to increase brain entropy). Why this was the case I’m not sure, but it may have to do with the fact that our own brains probably produce DMT, and that the level may have dropped over time. This could be because instrumental rationality became more adaptive as societies grew larger and more complex, and our major challenge became dealing with other people rather than with nature directly. This changed our style of thinking from “primary consciousness” to “secondary consciousness”:

This article proposes that states such as the psychedelic state, REM sleep, the onset-phase of psychosis and the dreamy-state of temporal lobe epilepsy are examples of a regressive style of cognition that is qualitatively different to the normal waking consciousness of healthy adult humans. We will refer to this mode of cognition as “primary consciousness” and the states themselves as “primary states.” To enter a primary state from normal waking consciousness, it is proposed that the brain must undergo a “phase transition”, just as there must have been a phase-transition in the evolution of human consciousness with the relatively rapid development of the ego and its capacity for metacognition. This implies that the relationship between normal waking consciousness and “primary consciousness” is not perfectly continuous.

The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs (Frontiers in Neuroscience)
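
The “entropy” here is the information-theoretic kind. A minimal sketch with made-up occupancy numbers shows what “higher entropy” means: a brain that spreads its activity evenly over many states scores higher than one locked into a single dominant state:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2 p), in bits; zero-probability states are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical occupancy of four coarse "brain states" (illustrative only):
ordinary = [0.85, 0.05, 0.05, 0.05]   # one dominant, well-worn state
entropic = [0.25, 0.25, 0.25, 0.25]   # states visited evenly, as on a psychedelic

print(f"ordinary consciousness:   {shannon_entropy(ordinary):.2f} bits")  # ~0.85
print(f"entropic (primary) state: {shannon_entropy(entropic):.2f} bits")  # 2.00, the maximum
```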

The Free Market Is A Failure

Sorry for the deliberately click-bait-y headline, but I think this message is important to get out there.

In my discussion a few months back on What is Neoliberalism, I noted that a core element of neoliberal philosophy is that markets are the only efficient, effective and rational way to distribute goods and services.

Neoliberals profess the idea that only competitive markets can allocate “scarce” resources efficiently, and that it is only such “free” markets that can lift people out of poverty and deliver broad prosperity. They pound it into our heads constantly.

Yet the Covid-19 crisis has illustrated spectacular and pervasive failures of such “free” markets all over the globe, and especially in the U.S. Instead of fairness or efficiency, we see systemic failure in every market we look at: the food industry, the medical industry, the retail industry, the employment market. Resources are being destroyed and misallocated on a massive scale.

Let’s start with the food industry, because food is the most important thing (“nine meals from anarchy,” and all that). Thousands and thousands of pigs are being slaughtered, their meat left to rot, eaten by no one, regardless of the forces of supply and demand:

The United States faces a major meat shortage due to virus infections at processing plants. It means millions of pigs could be put down without ever making it to table…

Boerboom, a third-generation hog farmer, is just one of the tens of thousands of US pork producers who are facing a stark reality: although demand for their products is high in the nation’s grocery stores, they may have to euthanise and dispose of millions of pigs due to a breakdown in the American food supply chain.

Meat shortage leaves US farmers with ‘mind-blowing’ choice (BBC)

Potatoes are sitting in Belgian warehouses and left to rot, only two short years after a drought threatened to produce a severe shortage:

Belgium: Lighthearted campaign to ‘eat more fries’ aims to lift heavy load (DW)

Meanwhile, dairy farmers in the U.S. heartland are dumping milk into the ground, to be drunk by no one.

Cows don’t shut off: Why this farmer had to dump 30,000 gallons of milk (USA Today)

In fact, the whole food situation is rather ugly, as this piece from The Guardian summarizes:

This March and April, even as an astounding 30 million Americans plunged into unemployment and food bank needs soared, farmers across the US destroyed heartbreaking amounts of food to stem mounting financial losses.

In scenes reminiscent of the Great Depression, dairy farmers dumped lakes of fresh cow’s milk (3.7m gallons a day in early April, now about 1.5 million per day), hog and chicken farmers aborted piglets and euthanized hens by the thousands, and crop growers plowed acres of vegetables into the ground as the nation’s brittle and anarchic food supply chain began to snap and crumble.

After delays and reports of concealing worker complaints, meatpacking plants that slaughter and process hundreds of thousands of animals a day ground to a halt as coronavirus cases spread like wildfire among workers packed tightly together on dizzyingly fast assembly lines.

Meanwhile, immigrant farmworkers toiled in the eye of the coronavirus storm, working and living in crowded dangerous conditions at poverty wages; at one Washington state orchard, half the workers tested positive for Covid-19. Yet many of these hardest working of Americans were deprived of economic relief, as they are undocumented. Advocates report more farmworkers showing up at food banks – and some unable to access food aid because they can’t afford the gas to get there.

None of this is acceptable or necessary and it’s not just about Covid-19, it’s also illustrative of a deeply deregulated corporate capitalism. America’s food system meltdown amid the pandemic has been long-developing, and a primary cause is decades of corporate centralization and a chaotic array of policies designed to prop up agribusiness profits at any cost.

Farmers are destroying mountains of food. Here’s what to do about it (Guardian)

That doesn’t sound very “efficient” to me. How about you? Free market fundamentalists, care to weigh in?

Meanwhile, hospitals in the United States, the very things one would most want to keep open during a pandemic, are actually closing across the country. Why is this happening? Because health care in the U.S. is a profit-driven enterprise that “competes” in the free market, and elective procedures—hospitals’ cash cow—have been suspended or postponed. U.S. hospitals are closing because they depend on these elective procedures to shore up their profits, and markets rely on profits.

As the deadly virus has spread beyond urban hotspots, many more small hospitals across the country are on the verge of financial ruin as they’ve been forced to cancel elective procedures, one of the few dependable sources of revenue. Williamson Memorial and similar facilities have been struggling since long before the pandemic — at least 170 rural hospitals have shut down since 2005, according to University of North Carolina research on rural hospital closures.

But even as hospitals in cities like New York City and Detroit have been deluged with coronavirus patients, many rural facilities now have the opposite problem: their beds are near-empty, their operating rooms are silent, and they’re bleeding cash.

More than 100 hospitals and hospital systems around the country have already furloughed tens of thousands of employees, according to a tally by industry news outlet Becker’s Hospital Review. They’ve sent home nurses and support staffers who would be deemed essential under state stay-home orders.

Rural hospitals are facing financial ruin and furloughing staff during the coronavirus pandemic (CNN)

And how about allocating labor via impersonal markets? How’s that going? Well, not so well. The workers with the skills most desperately needed on the front lines during the crisis are taking pay cuts and getting laid off left and right. Instead of contributing, they are sitting at home, unable to work even if they wanted to:

At a time when medical professionals are putting their lives at risk, tens of thousands of doctors in the United States are taking large pay cuts. And even as some parts of the US are talking of desperate shortages in nursing staff, elsewhere in the country many nurses are being told to stay at home without pay.

That is because American healthcare companies are looking to cut costs as they struggle to generate revenue during the coronavirus crisis.

“Nurses are being called heroes,” Mariya Buxton says, clearly upset. “But I just really don’t feel like a hero right now because I’m not doing my part.”

Ms Buxton is a paediatric nurse in St Paul, Minnesota, but has been asked to stay at home.

At the unit at which Ms Buxton worked, and at hospitals across most of the country, medical procedures that are not deemed to be urgent have been stopped. That has meant a massive loss of income.

Coronavirus: Why so many US nurses are out of work (BBC)

It’s an ironic twist as the coronavirus pandemic sweeps the nation: The very workers tasked with treating those afflicted with the virus are losing work in droves.

Emergency room visits are down. Non-urgent surgical procedures have largely been put on hold. Health care spending fell 18% in the first three months of the year. And 1.4 million health care workers lost their jobs in April, a sharp increase from the 42,000 reported in March, according to the Labor Department. Nearly 135,000 of the April losses were in hospitals.

As Hospitals Lose Revenue, More Than A Million Health Care Workers Lose Jobs (NPR)

So it doesn’t seem like “free and open” markets are doing so well with either health care or labor.

Meanwhile, U.S. states are competing against each other for desperately needed PPE, bidding up prices and preventing scarce supplies from going where they are most badly needed, which would naturally be wherever Covid-19 has struck hardest:

As coronavirus testing expands and more cases of infection are being identified, doctors, nurses and other healthcare workers are scrambling to find enough medical supplies to replenish their dwindling supply.

But state and local governments across the United States are vying to purchase the same equipment, creating a competitive market for those materials that drives up prices for everyone.

“A system that’s based on state and local governments looking out for themselves and competing with other state and local governments across the nation isn’t sustainable,” said John Cohen, an ABC News contributor and former acting Undersecretary of the Department of Homeland Security, “and if left to continue, we’ll certainly exacerbate the public health crisis we’re facing.”

“There’s a very real possibility,” he added, “that those state and local governments that have the most critical need won’t get the equipment they need.”

Competition among state, local governments creates bidding war for medical equipment (ABC News)

Yet neoliberals always tell us how important “competition” is in every arena of life.

Failure, failure, failure! Everywhere we look, we see failure. Pervasive, systematic failure. Resources going unused. Surpluses of food being dumped even while people go hungry and line up at food banks. Workers with necessary skills sitting at home, twiddling their thumbs. Other workers unable to even earn a living to support themselves and their families, no matter how badly they want to work. Masks and protective equipment NOT going to where they are most needed, their prices inflating, benefiting no one except profiteers, even as people die.

Tell me again about how the market is “efficient” at distributing resources. Tell me again about how central planning inevitably results in wasted resources, surfeits and shortages.

And here is the big, bold, underscored point:

The free-marketeers want to trumpet the market’s successes, but they don’t want to own its failures.

Free-market boosters always want to talk about the wonderful benefits of markets. How they allow multiple people to coordinate their activities across wide variations of space and time. How they allow knowledge to be distributed among many different actors. How they favor tacit knowledge that a single entity could not possess. Libraries of encomiums have been written celebrating the virtues of the “free” market. You know their names: The Provisioning of Paris, Economics in One Lesson, Free to Choose, I, Pencil, and all of that. Much of what passes for economic “science” is simply cheerleading for markets: the bigger, freer, and less regulated, the better.

Okay, fair enough.

But how about market failures? Why don’t they ever talk about that? Because if you read the economics books I cited above, you would come away with the idea that there are no market failures! That, in fact, there is no such thing. That markets, in effect, cannot fail!

If you want to own the successes, you need to own the failures.

Oh, they love, love, love to talk about central planning’s “failures”. They can’t get enough of that. They love to talk about empty shelves in the Soviet Union, long lines at supermarkets, the lack of toilet paper in Venezuela (amusingly, now a problem throughout the capitalist world), and the allegedly long waiting times in “socialized medicine” countries. We are constantly subjected to that drumbeat day after day after day. It’s part of every economics 101 course. Central planning doesn’t work. Central planning is inefficient. Central planning is “tyranny.”

But what about all that stuff I cited above?

Where are all the free-market fundamentalists now?

What is their excuse?

They’ll use special pleading. They’ll argue that it’s exceptional circumstances. That no one could have foreseen a “black swan” event like the global Covid-19 pandemic (despite numerous experts warning about it for years). They’ll tell us that markets work just fine under “normal” circumstances. They’ll say we cannot pass any kind of judgement on the failings of markets during such an unusual event.

Here’s why that argument is bullshit:

Pandemics are a real and recurring phenomenon in human history. We’ve been incredibly fortunate to have lived through a rare and atypical hundred-year stretch, from the 1918–1919 influenza pandemic until today, without a global pandemic or novel disease we couldn’t quickly contain and/or eradicate.

But pandemics are, and always have been, a societal threat, even if we’ve forgotten that fact. And the experts tell us that there will be a lot more of them in our future, with population overshoot, environmental destruction, encroachment on formerly unoccupied lands, and climate change all proceeding apace. What that means is this:

If your economic system can’t function properly during a pandemic, then your economic system is shit.

If your economic system only works when conditions are ideal, in fact depends upon conditions being ideal, then your economic system doesn’t really work at all. If something like a pandemic causes it to seize up and fail, then your economic system is poorly designed and doesn’t work very well. Not only do the free markets graphed on economists’ chalkboards not exist anywhere in the real world, they apparently rely on a blissful Eden-like Arcadia to function as intended—a situation any casual glance at human history tells us is highly unusual. Any disruption and they fall like dominoes. They are about as resilient as tissue paper.

And the stresses are only going to get worse in the years ahead, with climate change making some areas uninhabitably hot while other places are submerged under rising sea levels. And that’s before we get to the typical natural disasters like volcanic eruptions, tsunamis and earthquakes. And there will be novel plant diseases as well, unfolding against the increasing resistance of germs to antibiotics.

Will the free market fundamentalists and libertarian market cheerleaders acknowledge this???

Don’t hold your breath.

No, they will continue to lionize “private initiative” at every opportunity, while completely ignoring the stuff I opened this post with. They’ll sweep it under the rug or, more likely, simply handwave it away. They’ll continue to say that we need to scale back government regulation and interference and let the invisible hand sort it all out.

Because the discipline of economics as practiced today is not a science. It may not even rise to the level of a pseudoscience. It’s PR for laissez-faire capitalism.

Of course, we’ve had market failures before. They occurred all throughout the nineteenth century and during the Great Depression, for example. These are well documented. But many of the safeguards that came out of those bygone market failures, designed to prevent or mitigate them, have been systematically and deliberately dismantled over the past generation due to the rise of neoliberalism.

And now we’re paying the price.

Karl Polanyi made an important distinction between markets and Market Society. Markets are where people come together to buy, sell, and exchange surplus goods. These have existed throughout history. They are tangential to society, embedded in something larger than themselves. Such markets can be shut down without causing an existential threat to civilization.

But Market Society depends upon the impersonal forces of supply and demand and functioning markets for absolutely everything in the society, from jobs to food to health care. Everything is oriented around maximizing private profits, not human needs. When markets fail to function adequately, the results are unemployment, sickness, starvation and death. Shutting them down is an existential threat to civilization.

As Dmitry Orlov wrote in his best-known work, Reinventing Collapse, the Russians survived the collapse of the Soviet Union precisely because they didn’t rely on the Market.

Naturalizing markets in this way is an abdication of both causal and moral responsibility for famines, a way to avoid reality and the ethical consequences for people in a position to change things. Markets are not given; they are predicated on a host of laws and social conventions that can, if the need arises, be changed. It makes no sense for American farmers to destroy produce they can’t sell while food banks are struggling to keep up with demand. This kind of thinking is a way for powerful people to outsource ethical choices to the market, but the market has no conscience.

Famine Is a Choice (Slate)

Now, to be clear, I’m not necessarily making an argument for or against central planning as opposed to markets. That’s a different discussion.

But my core point is simply this: you cannot discuss market successes without discussing market failures. To do so is intellectually dishonest and disingenuous, not to mention incredibly dangerous and irresponsible. If economics were a real science, instead of just PR for capitalism, it would take a look at the things I described above and figure out ways they could have been avoided, regardless of any preconceived ideology or assumptions about the “right” way to arrange a society, or about how things “should” work. It would seek out ways for society to become, in Nassim Taleb’s terminology, “antifragile.”

But don’t hold your breath for that, either.

Attack Ads! Podcast

Jim and I chew the fat about the Nuisance Economy over at the Attack Ads! Podcast. It was fun to be a podcast guest once again, so I’m glad he had me on.

https://attackadspodcast.blogspot.com/2020/06/episode-152-nuisance-economy.html

Here’s a bit of our correspondence you might find interesting. I mentioned that Franklin Roosevelt did not have anything like Fox News to contend with. He countered that there was plenty of co-opted media at the time that was bitterly opposed to Roosevelt’s New Deal (mostly owned by rich newspaper barons). But my point was that television news did not yet exist, and television is a completely different animal: it renders people more suggestible than written media, where you actually have to parse the words. He replied:

Roosevelt dealt with privately-owned newspapers and (especially) radio, which has a power of its own. There is something about a well-modulated human voice to convey not just information but opinion.

You’re right about the light. There is something about flickering, low-light experiences which imprints on us easily. I’ve heard theories that tales told around the nightly campfire were the main method of imparting helpful wisdom, so our brains glommed on to those conditions for paying attention. Hence, the Latin word “focus,” which literally meant “domestic hearth.” Combine such a mental preference for optics with a human voice, both backed by vast fortunes and the need for their continuance, and… Oh, yeah, here we are!

We also talked a bit about the economics of Henry George via email. I’m somewhat familiar with George, but haven’t dived in too deep. Jim mentioned an economist working in the Georgist tradition called Mason Gaffney: https://masongaffney.org/

Gaffney is yet another economist banished from the “respectable” discipline for heresy (but not inaccuracy). As I’ve said so often, economics is really a type of theology.

He also said quite a few interesting things about rents and rent-seeking. He turned me on to this author: Gerrit De Geest. Chapter one of his book is available as a paper online: Rents: How Marketing Causes Inequality (Chapter 1)

De Geest’s argument is that wide wealth and income differentials are not primarily the result of differences in individual ability, intelligence, inventiveness, or “hard work.” Instead, he argues, they are the result of the ability to capture outsize economic rents. This is done by distorting markets, and the primary means of distorting markets is (ironically) called marketing. Marketing today is the science of distorting markets for the benefit of businesses, in order to extract profits far in excess of the costs of production and distribution. This covers everything from exploiting cognitive biases, to vendor lock-in, to extending copyright protection, and many other techniques. (Think of the printer sold cheap whose proprietary ink cartridges sell for many times what they cost to make.)

Furthermore, he claims, these techniques have reached such a level of sophistication and ubiquity that nearly all markets everywhere are heavily distorted towards rent-seeking, and consumers are often powerless to resist. He sees this as an under-appreciated reason for the rise of the extreme inequality we see everywhere today. And it is all perfectly legal. As he puts it, “business schools have outsmarted law schools.”

We’ll take a closer look at that another time.

A Theory About Stocks

A lot of people have been utterly mystified by the fact that the stock market seems to be going up and up and up, even as unemployment soars to Great Depression levels, pandemics shut down economies across the globe, American cities are in full revolt, the military is on the streets, clouds of locusts stalk Africa, hurricane season approaches, and the world just generally seems to be melting down around us.

How the f*ck can stocks still be going up???

First, the obvious statement: stocks are not the economy. But I’m sure you already knew that. As Paul Krugman put it:

[W]henever you consider the economic implications of stock prices, you want to remember three rules. First, the stock market is not the economy. Second, the stock market is not the economy. Third, the stock market is not the economy. That is, the relationship between stock performance — largely driven by the oscillation between greed and fear — and real economic growth has always been somewhere between loose and nonexistent…Did I mention that the stock market is not the economy?

The stock market is simply a casino. Yes, yes, you’ll read in the economic textbooks written by very serious academic scholars using sophisticated academic terms like “capital resource allocation,” and “price discovery,” and all that, as well as about how distinguished gentlemen are doing very, very serious research and making totally rational assumptions about the future needs of society based on prospectuses and sober, realistic assessments of future earnings and capital flows and…blabbity-blabbity-blah.

Don’t believe a word of it—it’s a f*cking casino. That’s the best way to describe what’s going on once you strip away all the economic jargon designed to baffle us into thinking it’s something so sophisticated that we mere mortals with our puny brains cannot possibly understand it.

News flash: Casinos aren’t rational!

Now, of course, it’s not just pure dumb luck like spinning a roulette wheel. It’s obvious that some companies have better future prospects than others. You can get information to make informed choices, just as you can count cards to do better at games of blackjack. But the future is inherently unknowable, and stocks are as much bets on the future as they are assessments of the present.

So what’s a stock worth? What someone else is willing to pay for it.

And I hope you’re smart enough not to buy into economists’ explanations of how this is all totally rational. In fact, as anyone who has had a family member with a chronic gambling addiction knows all too well, it is precisely when one is gambling and seeking a windfall that one is at one’s least rational.

Furthermore, stock bubbles have been a persistent phenomenon throughout the entire history of capitalism, from the Tulip Bubble (when a single tulip bulb traded for a year’s wages), to the Mississippi Bubble, to the South Sea Bubble, to railroad mania, to the Great Depression, to 2008, and everything in between.

However, I have a theory as to why today’s stock market seems to be so awesomely divorced from the actual world as to seem like it’s on a totally different planet.

In the age of neoliberalism, with wages having been hollowed out for generations, gambling in the stock market is the only realistic way left to make money. Internet gurus like Mr. Money Mustache gain fame and celebrity by telling us we can all get rich by pouring our money into stonks and retiring at 30! (Seriously, this is what the guy says.) Everywhere on the internet, everyone seems to have morphed overnight into little mini-J.P. Morgans, managing their oh-so-complex portfolios, buying and selling their way into the one percent, and ready to share their galaxy-brain financial knowledge with the rest of us mortals. On Reddit, anyone not gambling in the market is a chump and deserves to starve!

Our only way to retire, we’re told, is buying stocks. Even pension funds are invested in stocks. They even tried to put all our Social Security money into the stock market, for crying out loud (which totally wouldn’t have unrealistically inflated stock values at all, oh no!).

Where else is the money gonna go???

Basically neoliberalism has built everything around the edifice of the stock market. Everything is wrapped up in it. Absolutely everything is invested in these imaginary numbers, untethered from reality. It might as well be a f*cking video game score. To that end, we can all just extend and pretend forever. Why the hell not?

Stocks, stocks, stocks, stocks. Is it any wonder the market always goes up?

To be alive in America is to be assaulted by endless high-decibel blather about the critical importance of the stock market. There are entire TV channels devoted to it, new highs are always celebrated on network news, it’s on the front page of newspapers, it’s on an app that comes preinstalled on your iPhone, and the president is constantly yelling at you about it.

Yet the stock market has little direct relevance for regular people. By some estimates, the richest 10 percent of U.S. households account for over 80 percent of American stock ownership. The richest 1 percent by themselves own half of that, or 40 percent of stock. Half of Americans own no stock at all.

Once you understand this, the media’s stock market mania is maddeningly hilarious. It’s as though half of the national news was yammering about the weather in Greenwich, Connecticut. (“Our top story on ABC World News Tonight: This afternoon Greenwich was unseasonably warm.”) And no one notices how bizarre this is.

By contrast, think about economic facts with concrete relevance to the lives of normal people: the unemployment rate, whether the middle class is getting raises, if the minimum wage is going up, strikes, health care, workplace safety. There’s no cable TV ticker about that.

Coronavirus Matters, the Stock Market Doesn’t, and Thinking It Does May Literally Kill Us (The Intercept)

Paul Krugman also points to the lack of alternatives for actual productive investment:

Investors are buying stocks in part because they have nowhere else to go. In fact, there’s a sense in which stocks are strong precisely because the economy as a whole is so weak. What, after all, is the main alternative to investing in stocks? Buying bonds. Yet these days bonds offer incredibly low returns. The interest rate on 10-year U.S. government bonds is only 0.6 percent, down from more than 3 percent in late 2018. If you want bonds that are protected against future inflation, their yield is minus half a percent. So buying stock in companies that are still profitable despite the Covid-19 recession looks pretty attractive.

And why are interest rates so low? Because the bond market expects the economy to be depressed for years to come, and believes that the Federal Reserve will continue pursuing easy-money policies for the foreseeable future. As I said, there’s a sense in which stocks are strong precisely because the real economy is weak.

Crashing Economy, Rising Stocks: What’s Going On? (New York Times via Reddit)

And, of course, the Federal Reserve is pumping staggering amounts of money into financial markets, buying up assets all over the place. By some measures, several trillion dollars have been spent buying up bonds and other securities, propping up asset prices across the board. Strangely, neither Joe Biden nor Donald Trump ever once asked howyagunnapayforit—they only ask that about policies that benefit anyone outside the investor class.

Now, to the point: nearly all financial trading nowadays is done by bots; that is, computers trading with each other to get the best deal. The umbrella term for the industry is fintech (financial technology); the specific practice is algorithmic, or high-frequency, trading.

I tried to find out how long the average stock is held today, and I couldn’t pin it down. I’ve heard everything from four months to 22 seconds. The only points of agreement are that holding periods have been getting shorter and shorter over the years, and that trading volume has increased enormously (just how much is also hard to discern). But just how short remains a mystery. Michael Hudson apparently buys the 22-seconds figure:

Michael Hudson, a former Wall Street economist at Chase Manhattan Bank who also helped establish the world’s first sovereign debt fund recently said: “Take any stock in the United States. The average time in which you hold a stock is – it’s gone up from 20 seconds to 22 seconds in the last year. “Most trades are computerised. Most trades are short-term. The average foreign currency investment lasts – it’s up now to 30 seconds, up from 28 seconds last month. The financial sector is short term, yet they talk as if they’re long term.”

Computerised high-frequency trading, which makes up about 70pc of all trades, is the subject of the book, The Fear Index, published late last year.

https://www.telegraph.co.uk/finance/personalfinance/investing/9021946/How-long-does-the-average-share-holding-last-Just-22-seconds.html

However, this Business Insider article disputes some of those figures, citing the original source, though it doesn’t offer any alternative figures of its own. I would imagine BI doesn’t want people to start questioning the stock market, and Hudson is very plugged into the finance and economics worlds, so I’m inclined to trust his figures. Most financial reporting that us “ordinary people” can find via Google is designed to prop up the legitimacy of the casino by pulling the wool over our eyes.
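For what it’s worth, there is a crude way to sanity-check such claims yourself, assuming you trust published turnover data: the average holding period of a share is roughly the inverse of the annual turnover rate (shares traded per year divided by shares outstanding). A minimal sketch, with made-up numbers:

```python
# Back-of-the-envelope estimate of average holding period from turnover.
# All figures below are placeholders, not real market data.

SHARES_OUTSTANDING = 1_000_000_000  # shares of a hypothetical company
AVG_DAILY_VOLUME = 40_000_000       # shares traded per day (made up)
TRADING_DAYS_PER_YEAR = 252

annual_turnover = AVG_DAILY_VOLUME * TRADING_DAYS_PER_YEAR / SHARES_OUTSTANDING
avg_holding_period_days = 365 / annual_turnover

print(f"annual turnover: {annual_turnover:.1f}x")
print(f"implied average holding period: {avg_holding_period_days:.0f} days")
```

Note that an average like this lumps buy-and-hold pension money in with high-frequency churn, which is probably one reason published estimates can range from seconds to months depending on what, exactly, is being averaged.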

This is just an anecdote, but I was talking with a friend who works in IT. He knows someone who works in tech in the financial sector, and that person moved one of the firm’s computers from one office to another office closer to the fiber-optic line to shave 15 femtoseconds off of trading speed. Yes, femtoseconds, that’s what he said. To save you the search, a femtosecond is one quadrillionth of a second (10⁻¹⁵ seconds).
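That particular number can’t literally be right, though, and you can check it with nothing but the speed of light, which is the hard upper bound on what relocating a machine could save. A quick sketch (the time values are just the anecdote’s figure, scaled up):

```python
# How far can a signal travel in a given time? This bounds the latency
# saved by moving a computer. (Signals in optical fiber travel at only
# about two-thirds of c, so real savings are smaller still.)
C = 299_792_458  # speed of light in a vacuum, metres per second

for label, seconds in [("15 femtoseconds", 15e-15),
                       ("15 nanoseconds", 15e-9),
                       ("15 microseconds", 15e-6)]:
    print(f"{label}: light travels {C * seconds:.4g} metres")
```

Fifteen femtoseconds buys you about 4.5 micrometres, less than the width of a human hair, so moving a machine across an office would realistically save nanoseconds at best. The spirit of the story survives either way: these firms fight over almost unimaginably small slices of time.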

Now, the bots are obviously unaware that the world is melting down around them. How could they know? They have only one goal: buy and sell stocks to maximize value—almost like the mythical Paperclip Maximizer of AI paranoia.

What’s the cardinal rule of stock buying? Buy low and sell high.

We humans tend to do the opposite thanks to cognitive biases such as loss aversion and the Bandwagon Effect. We see stocks going up and we want to buy. We see stocks going down and we get spooked, so we sell. That is, we do the opposite of what we should rationally be doing—we tend to buy high and sell low.

So that’s why the financial industry turned to computers.

Computers do not have the irrational biases that fallible humans do. That’s why they are considered better. And that’s why trading is increasingly done by these computers, often fortified with some sort of AI to game the system using algorithms developed by “quants.” I’ve repeatedly heard that the trading pits, where you see all those angry, overweight white dudes in ties screaming at each other like a troop of rabid baboons until they’re beet-red in the face, are kept open only as a sort of performance theater for the masses—there’s no actual trading going on there anymore. It’s all run by computers.

It’s hard to get good data on this; there seems to be a lot of secrecy surrounding it. Presumably they need to prop up the legitimacy of the “democratic” stock market for us average rubes.

But if you’re a bot and you’re designed to buy low, what happens when stocks start dropping in price? When the price is dropping, that means stocks are cheaper. When this happens to fairly good (esp. blue-chip) stocks, that means that they are undervalued. So what do you do? You buy!

Of course, you don’t know the real world of actual people and things “out there” is melting down, because you’re just a brain in a box.

So the computer brain in the box just sees “undervalued stocks” and thinks “buy.” And then other bots see this and buy too. They follow their instructions. And then they sell the stocks to each other. Wash, rinse, repeat. Almost like a simulation.

And, voilà, the stock market goes up. Trading goes on as normal. Paradoxically, the lower the prices go, the more undervalued the stocks appear to the bots, and so the more they buy expecting to get a bargain. So buying activity actually increases! And then the bots buy and sell the stocks to each other, until they go back up to more-or-less where they were before, which they read as the “correct” price, because they are just brains in boxes, with no knowledge that meatspace is quickly sliding into the depths of hell.
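To make that feedback loop concrete, here is a minimal toy sketch of my own (all names and parameters are invented for illustration; as I admit below, I have no inside knowledge of real trading systems, which are vastly more sophisticated): a crowd of bots that treat a fixed number as the fair price will mechanically buy every dip, dragging the price back toward that number no matter what is happening outside the box.

```python
# A toy sketch of "buy the dip" bots, invented for illustration only.
# Each day, bots compare the price to a hard-coded fair value and
# generate net buying whenever the stock looks "undervalued."
import random

FAIR_VALUE = 100.0  # the bots' built-in idea of the "correct" price
PRESSURE = 0.05     # how strongly net bot demand moves the price per day
NOISE = 0.5         # random order flow

def bot_demand(price):
    """Net demand from bots that know nothing about the outside world:
    the further the price falls below FAIR_VALUE, the harder they buy."""
    return FAIR_VALUE - price  # positive = net buying

def simulate(days=250, shock_day=50, shock_size=-30.0, seed=1):
    random.seed(seed)
    price = FAIR_VALUE
    history = []
    for day in range(days):
        if day == shock_day:
            price += shock_size  # meatspace melts down; the price crashes
        price += PRESSURE * bot_demand(price) + random.gauss(0, NOISE)
        history.append(price)
    return history

prices = simulate()
print(f"just after the crash: {prices[50]:6.1f}")   # roughly 70
print(f"100 days later:       {prices[150]:6.1f}")  # back near 100
```

Nothing in that loop ever consults the real economy. The “correct” price is simply whatever number the machines were told to defend, so every crash just looks to them like a bargain.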

A video game score is the appropriate analogy here. What “real world” thing is your video game score tied to? Does it care whether you’re sick or unemployed? No, it only cares whether you’re playing, and the number is just a made up number based on the whim of a bunch of computer programmers somewhere.

So, the reason stocks are still so high, in my theory, aside from just the Fed money bazooka, is because trading is done by machines with simple mandates based on number crunching, blissfully unaware of the real world going on outside the box, which is riddled with pandemic disease, increasing violence, insurrection, social breakdown, economic depression, and environmental collapse.

I imagine a time in the far future when, after the oceans have risen and the coastal cities have drowned, when the Amazon rain forest has been slashed and burned and billions flee parts of the globe too hot for human metabolism, after 70 percent of terrestrial animals have gone extinct and we’re scraping the ocean floor for methane hydrates to run our remaining power stations, the computers will still be happily swapping stocks with each other, trading away in the darkness, unmonitored, sending the Dow to 100,000,000, or whatever imaginary nonsense number it will be by then.

That’s my theory, anyway. However, I have no specialist knowledge in either trading or fintech, so I might be way off on this. If by some unlikely circumstance, someone with actual inside knowledge of either computerized trading or the stock market happens to read this, please let me know if this theory holds any water or is total bullsh!t. Thanks.

BONUS:

Presented without comment:

https://twitter.com/yashar/status/1269048818739212288

Civilization Never Changes

I’m glad I was able to recall where I read this fact:

When humans start treating animals as subordinates, it becomes easier to do the same thing to one another. The first city-states in Mesopotamia were built on this principle of transferring methods of control from creatures to human beings, according to the archaeologist Guillermo Algaze at the University of California in San Diego. Scribes used the same categories to describe captives and temple workers as they used for state-owned cattle.

How domestication changes species, including the human (Aeon)

Because it sets this up perfectly:

Do I even need to comment? Plus ça change, plus c’est la même chose… (the more things change, the more they stay the same).

Have We Entered the Ages of Discord?

Peter Turchin, of Secular Cycles fame, predicted that political violence and discord in the United States would reach a peak in 2020 (or thereabouts). He even put that prediction in writing in a book entitled Ages of Discord:

In 2010 I made the prediction that the United States will experience a period of heightened social and political instability during the 2020s…Structural-demographic theory (SDT) suggests that the violence spike of the 2020s will be worse than the one around 1970, and perhaps as bad as the last big spike during the 1920s. Thus, the expectation is that there will be more than 100 events per 5 years. In terms of the second metric, we should expect more than 5 fatalities per 1 million of population per 5 years, if the theory is correct.

And there you have it. If violence doesn’t exceed these thresholds by 2025, then SDT is wrong. (For a U.S. population of roughly 330 million, that second metric works out to more than 1,650 deaths from political violence per five-year window.)

A Quantitative Prediction for Political Violence in the 2020s (Cliodynamica)

And the 1970s were pretty bad. From a review of Ages of Discord at Slate Star Codex:

The 1970s underground wasn’t small. It was hundreds of people becoming urban guerrillas. Bombing buildings: the Pentagon, the Capitol, courthouses, restaurants, corporations. Robbing banks. Assassinating police. People really thought that revolution was imminent, and thought violence would bring it about.

Book Review: Ages Of Discord (Slate Star Codex)

See also Coronavirus and Our Age of Discord (Cliodynamica)

There are several general trends during the pre-crisis phase that make the rise and spread of pandemics more likely. At the most basic level, sustained population growth results in greater population density, which increases the basic reproduction number of nearly all diseases. Even more importantly, labor oversupply, resulting from overpopulation, depresses wages and incomes for most. Immiseration, especially its biological aspects, makes people less capable of fighting off pathogens. People in search of jobs move more and increasingly concentrate in the cities, which become breeding grounds for disease. Because of greater movement between regions, it is easy for disease to jump between cities.

Elites, who enjoy growing incomes resulting from low worker wages, spend them on luxuries, including exotic ones. This drives long-distance trade, which more tightly connects distant world regions. My 2008 article is primarily about this process, which we call “pre-modern globalizations.” As a result, a particularly aggressive pathogen arising in, for example, China, can rapidly jump to Europe.

Finally, when the crisis breaks out, it brings about a wave of internal warfare. Marauding armies of soldiers, rebels, and brigands themselves become incubators of disease, which they spread widely as they travel through the landscape.

This description is tailored to pre-modern (and early modern) Ages of Discord. Today, in 2020, details are different. But the main drivers — globalization and popular immiseration — are the same…

Right now Turchin is starting to look like Nostradamus. He hasn’t addressed this so far on his blog, but I’m interested to hear his take (see the update below).

One thing I wonder about though: the police state is so much more powerful than it was in the 1970s due to digital surveillance technology. I mean, everyone carries around a device that tracks all their movements all the time, and the few who don’t will be noticeable by their absence. Cameras are everywhere. Our online presence is constantly monitored, as Edward Snowden revealed (I don’t think it’s a coincidence that laws prohibiting government monitoring of the citizenry have been repealed just in the last few weeks). Plus, the systems of cybernetic control for managing large populations are so much more sophisticated, as Adam Curtis described in All Watched Over By Machines of Loving Grace. I think these cybernetic systems also foment discord as well, since they allow segments of the population to live in completely separate realities managed by different sets of elites—there is no consensus reality anymore, as responses to the pandemic showed.

But it does seem like an alarming number of people have been disenfranchised and have no constructive outlet for their anger, and no effective recourse for changing the system anymore. Add to that Great Depression levels of popular immiseration while elites are being bailed out with unlimited funds. This is what happens when you make peaceful revolution impossible—violent revolution becomes inevitable.

UPDATE: Turchin’s latest post (June 1)

What is much more certain is that the deep structural drivers for instability continue to operate unabated. Worse, the Covid-19 pandemic exacerbated several of these instability drivers. This means that even after the current wave of indignation, caused by the killing of George Floyd, subsides, there will be other triggers that will continue to spark more fires—as long as the structural forces, undermining the stability of our society, continue to provide abundant fuel for them.

Archaeology/Anthropology Roundup


I want to get back to some of the topics I’ve left hanging, but first I’d like to mention a few other topics that have been sadly neglected during the whole—er, pandemic thing—but that we frequently discuss here on the blog. Specifically archaeology and architecture. This one will be about archaeology.

I want to highlight something that came out about a month ago that you’re probably aware of. If not, here it is: the Amazon rain forest has been found to be one of the cradles of agriculture.

The original cradles of agriculture described in history textbooks were the great river valley of Mesopotamia between the Tigris and Euphrates rivers, along with the Nile valley. As archaeology expanded from its European origins, the Indus river valley in India/Pakistan and the Yellow river valley in China were included as cradles of agriculture. Then came New World sources of maize and potatoes in Central and South America. In recent years, archaeologists have included a few other places, notably Papua New Guinea. Now, it seems we can add the Amazon rain forest to the list:

There’s a small and exclusive list of places where crop cultivation first got started in the ancient world – and it looks as though that list might have another entry, according to new research of curious ‘islands’ in the Amazon basin.

The savannah of the Llanos de Moxos in northern Bolivia is littered with thousands of patches of forest, rising a few feet above the surrounding wetlands. Many of these forest islands, as researchers call them, are thought to be the remnants of human habitation from the early and mid-Holocene.

Now, thanks to new analysis of the sediment found in some of these islands, researchers have unearthed signs that these spots were used to grow cassava (manioc) and squash a little over 10,000 years ago.

That’s impressive, as this timing places them some 8,000 years earlier than scientists had previously found evidence for, indicating that the people who lived in this part of the world – the southwestern corner of the Amazon basin – got a head start on farming practices.

In fact, the findings suggest that southwestern Amazonia can now join China, the Middle East, Mesoamerica, and the Andes as one of the areas where organised plant growing first got going – in the words of the research team, “one of the most important cultural transitions in human history”.

Strange Forest Patches Littering The Amazon Point to Agriculture 10,000 Years Ago (Science Alert)

The researchers were able to identify evidence of manioc (cassava, yuca) grown 10,350 years ago. Squash appears 10,250 years ago, and maize more recently, just 6,850 years ago.

“This is quite surprising,” said Dr [Umberto] Lombardo. “This is Amazonia, this is one of these places that a few years ago we thought to be like a virgin forest, an untouched environment. Now we’re finding this evidence that people were living there 10,500 years ago, and they started practising cultivation.”

The people who lived at this time probably also survived on sweet potato and peanuts, as well as fish and large herbivores. The researchers say it’s likely that the humans who lived here may have brought their plants with them. They believe their study is another example of the global impact of the environmental changes being felt as the world warmed up at the end of the last ice age.

“It’s interesting in that it confirms again that domestication begins at the start of the Holocene period, when we have this climate change that we see as we exit from the ice age,” said Dr Lombardo. “We entered this warm period, when all over the world at the same time, people start cultivating.”

Crops were cultivated in regions of the Amazon ‘10,000 years ago’ (BBC)

Note that what was grown appears to be root and vegetable crops like cassava and squash, not cereal grains. Recall James Scott’s point that annual cereal grains were a starting point for civilizations: they were preservable and ripened at the same rate at the same time, making them confiscatable by central authorities. Cultures that subsisted on perishable garden plants, however, could escape the trap of civilization.

Here’s a major study that ties into the feasting theory: the first beer was brewed as part of funerary rites for the dead:

The first beer was for the dead. That’s according to a 2018 study of stone vessels from Raqefet Cave in Israel, a 13,000-year-old graveyard containing roughly 30 burials of the Natufian culture. On three limestone mortars, archaeologists found wear and tear and plant molecules, interpreted as evidence of alcohol production. Given the cemetery setting, researchers propose grog was made during funerary rituals in the cave, as an offering to the dearly departed and refreshment for the living. Raqefet’s beer would predate farming in the Near East by as much as 2,000 years — and booze production, globally, by some 4,000 years.

The beer hypothesis, published in the Journal of Archaeological Science: Reports, comes from Raqefet excavators, based at Israel’s University of Haifa, and Stanford University scientists, who conducted microscopic analyses. In previous research, they made experimental brews the ancient way, to see how the process altered artifacts. Some telltale signs were then identified on Raqefet stones: A roughly 10-inch diameter mortar, carved directly into the cave floor, had micro-scratches — probably from a wooden pestle — and starch with damage indicative of mashing, heating and fermenting, all steps in alcohol production. Two funnel-shaped stones had traces of cereals, legumes and flax, interpreted as evidence that they were once lined with woven baskets and used to store grains and other beer ingredients. Lead author Li Liu thinks Natufians also made bread, but that these three vessels were for beer — the earliest yet discovered.

Was the First Beer Brewed for the Dead? (Discover)

The counterpoint is that they were baking bread instead, leading back to the old question: what were grains first cultivated for, beer or bread? My suspicion is the former, with the latter being an effective use of “surplus” resources, or a backup strategy in case of food shortages.

The connection between beer-brewing and funerary rites is significant, however. The feasting theory of inequality’s origins doesn’t go into much detail about why such feasts were held. But if such ritual feasts were held as a means of commemorating the dead—most likely tied to ancestor worship—then the existence of such events takes on additional importance.

When I talked about the history of cities and the feasting theory, I noted that these seem to have taken place in ritual areas that were marked off (sacred versus profane) for the purposes of feasting and trade, and where multiple different cultures would coalesce and mingle. At such locations, both feasting and trading were carried out. These locations appear to have played a crucial role in human social development, and they’ve been found all over the world. Archaeologists have been studying one in Florida:

More than a thousand years ago, people from across the Southeast regularly traveled to a small island on Florida’s Gulf Coast to bond over oysters, likely as a means of coping with climate change and social upheaval.

Archaeologists’ analysis of present-day Roberts Island, about 50 miles north of Tampa Bay, showed that ancient people continued their centuries-long tradition of meeting to socialize and feast, even after an unknown crisis around A.D. 650 triggered the abandonment of most other such ceremonial sites in the region. For the next 400 years, out-of-towners made trips to the island, where shell mounds and a stepped pyramid were maintained by a small group of locals. But unlike the lavish spreads of the past, the menu primarily consisted of oysters, possibly a reflection of lower sea levels and cool, dry conditions.

During tough times, ancient ‘tourists’ sought solace in Florida oyster feasts (Phys.org)

So I guess Florida has always been a magnet for tourists.

And although Stonehenge is well-known, much less known is Pömmelte, “Germany’s Stonehenge”.

Starting in April, an about-4,000-year-old settlement will be excavated to provide insights into Early Bronze Age life. Settlements of this size have not yet been found at the related henges in the British Isles.

Pömmelte is a ring-shaped sanctuary with earth walls, ditches and wooden piles that is located in the northeastern part of Germany, south of Magdeburg. The site is very much reminiscent of the world-famous monument Stonehenge, and it is likely that the people there performed very similar rituals to those of their counterparts in what is now Britain 4,300 years ago.

Who lived near Pömmelte, the ‘German Stonehenge’? (DW)

This place reminds me a lot of Woodhenge at the Cahokia complex (Wikipedia), which I was able to visit a few years ago. The presence of such similar structures separated across vast times and places (precluding any chance of cultural contact) is something that we need to think deeply about.

From the article above, I also learned about the Nebra Sky Disc (Wikipedia). Recall that the first cities were trying to replicate a “cosmic order” here on earth.

Related: Hunter-gatherer networks accelerated human evolution (Science Daily)

Humans began developing a complex culture as early as the Stone Age. This development was brought about by social interactions between various groups of hunters and gatherers, a UZH study has now confirmed…

The researchers equipped 53 adult Agta living in woodland in seven interconnected residential camps with tracking devices and recorded every social interaction between members of the different camps over a period of one month. The researchers also did the same for a different group, who lived on the coast… The team of researchers then developed a computer model of this social structure and simulated the complex cultural creation of a plant-based medicinal product.

In this fictitious scenario, the people shared their knowledge of medicinal plants with every encounter and combined this knowledge to develop better remedies. This process gradually leads to the development of a highly effective new medicinal product. According to the researchers’ simulation, an average of 250 (woodland camps) to 500 (coastal camps) rounds of social interactions were required for the medicinal product to emerge.

And see: Social Networks and Cooperation in Hunter-Gatherers (NCBI)

A lesser-known megalithic necropolis: the Ħal Saflieni Hypogeum (Wikipedia), built some 5,000 years ago. Do these look like they were built by people who were filthy and starving?

Related: I only recently heard about this site, but apparently there was a significant industrial complex devoted to the manufacture of flint tools that operated during the Stone Age and well into the Bronze and Iron Ages: Grimes Graves (Wikipedia). This gives great insight into the fact that complex specialization of labor and regional comparative advantage have always been with us; they weren’t invented in the time of Smith or Ricardo. We just didn’t fetishize them the way we do now.

And the salt mines of Hallstatt in modern-day Austria have been worked for thousands of years, since before the Bronze Age. Apparently, mining required child labor:

Mining there began at least 7,000 years ago and continues modestly today. That makes the UNESCO World Heritage site “the oldest industrial landscape in the world [that’s] still producing,” says [archaeologist Hans] Reschreiter, who has led excavations at Hallstatt for nearly two decades.

But the mine’s peak was during the Bronze and Iron ages, when salt’s sky-high value made Hallstatt one of Europe’s wealthiest communities. Archaeologists understand a great deal about operations then, thanks to an extraordinary hoard of artifacts including leather sacks, food scraps, human feces and millions of used torches.

Many of the finds are made of perishable materials that are usually quick to decay. They survived in the mine’s tunnels because salt is a preservative — the very reason it was in such high demand during Hallstatt’s heyday.

Among the artifacts, the small shoes and caps showed children were in the mine. But researchers needed more evidence to determine whether the young ones were merely tagging along with working parents or actually mining.

To understand the children’s roles, Austrian Academy of Sciences anthropologist Doris Pany-Kucera turned to their graves. In a study of 99 adults from Hallstatt’s cemetery, she found skeletal markers of muscle strain and injury, suggesting many villagers performed hard labor — some from an early age.

Then, in 2019, she reported her analysis of the remains of 15 children and teenagers, finding signs of repetitive work. Children as young as 6 suffered arthritis of the elbow, knee and spine. Several had fractured skulls or were missing bits of bone, snapped from a joint under severe strain. Vertebrae were worn or compressed on all individuals.

Combining clues from the Hallstatt bones and artifacts, researchers traced the children’s possible contributions to the salt industry. They believe the youngest children — 3- to 4-year-olds — may have held the torches necessary for light. By age 8, kids likely assumed hauling and crawling duties, carrying supplies atop their heads or shimmying through crevices too narrow for grown-ups…

The Ancient Practice of Child Labor Is Coming to Light (Discover)

And this point is important:

It’s no surprise that the young labored at Hallstatt. Children are, and always have been, essential contributors to community and family work. A childhood of play and formal education is a relatively modern concept that even today exists mostly in wealthy societies.

There are those who say that, despite all our technological advancements, we haven’t really reduced the need for human labor. But that’s clearly untrue! We’ve already effectively eliminated the labor of everyone under 18 and, from a practical standpoint, nearly everyone under 21. We forget this because it’s been normalized, but people younger than 18 labored all throughout human history, even into the early twentieth century. Now they are no longer needed or wanted. And with ever more schooling required for jobs, we keep raising the age of entry into the workforce. Note that “retirement”—to the extent that it continues to exist—is also a modern phenomenon, removing people over 55 or 60 from the workforce. Labor has most certainly been eliminated, and will continue to be.

Neanderthals and humans co-existed in Europe much longer than we previously thought. (Guardian)

A reminder that many of the earliest human habitats are under the water: Early humans thrived in this drowned South African landscape (Phys.org)

Archaeologists analyzed an ancient cemetery in Hungary, full of the distinctive elongated skulls the Huns were known for:

They found that Mözs-Icsei dűlő was a remarkably diverse community and were able to identify three distinct groups across two or three generations (96 burials total) until the abandonment of Mözs cemetery around 470 AD: a small local founder group, with graves built in a brick-lined Roman style; a foreign group of twelve individuals of similar isotopic and cultural background, who appear to have arrived around a decade after the founders and may have helped establish the traditions of grave goods and skull deformation seen in later burials; and a group of later burials featuring mingled Roman and various foreign traditions.

51 individuals total, including adult males, females, and children, had artificially deformed skulls with depressions shaped by bandage wrappings, making Mözs-Icsei dűlő one of the largest concentrations of this cultural phenomenon in the region. The strontium isotope ratios at Mözs-Icsei dűlő were also significantly more variable than those of animal remains and prehistoric burials uncovered in the same geographic region of the Carpathian Basin, and indicate that most of Mözs’ adult population lived elsewhere during their childhood. Moreover, carbon and nitrogen isotope data attest to remarkable contributions of millet to the human diet.

Deformed skulls in an ancient cemetery reveal a multicultural community in transition (Phys.org)

See also: Strange, elongated skulls reveal medieval Bulgarian brides were traded for politics (Science)

Speaking of burials: researchers found 1,000-year-old burials in Siberia wearing copper masks: Mummified by accident in copper masks almost 1,000 years ago: but who were they? (Siberian Times) I thought this was fascinating, given that copper has been shown to kill coronaviruses, and we are being told to wear masks to prevent transmission. Copper-infused masks are becoming popular (a Google search turned up the above article). Coincidence? Probably.

Religion in South America:

An ancient group of people made ritual offerings to supernatural deities near the Island of the Sun in Lake Titicaca, Bolivia, about 500 years earlier than the Incas, according to an international team of researchers. The team’s findings suggest that organized religion emerged much earlier in the region than previously thought.

Rise of religion pre-dates Incas at Lake Titicaca (phys.org)

This is possibly the coolest scientific study ever conducted: a group of scientists have reconstructed Bronze Age fighting techniques by looking at the wear marks on Bronze Age weapons and armor. Wow! Time to redo that famous fight scene from Troy?

While a graduate student at Newcastle University, [University of Göttingen archaeologist Raphael Hermann] recruited members of a local club devoted to recreating and teaching medieval European combat styles, and asked them to duel with the replicas, using motions found in combat manuals written in the Middle Ages. After recording the combat sequences using high-speed cameras, the researchers noted the type and location of dents and notches left after each clash.

The team assigned characteristic wear patterns to specific sword moves and combinations. If the motions left the same distinctive marks found on Bronze Age swords, Hermann says, it was highly likely that Bronze Age warriors had also used those moves. For example, marks on the replica swords made by a technique known to medieval German duelists as versetzen, or “displacement”—locking blades in an effort to control and dominate an opponent’s weapon—were identical to distinct bulges found on swords from Bronze Age Italy and Great Britain.

Next, Hermann and colleagues put 110 Bronze Age swords from Italy and Great Britain under a microscope and cataloged more than 2500 wear marks. Wear patterns were linked to geography and time, suggesting distinct fighting styles developed over centuries… Displacement, for example, didn’t show up until 1300 B.C.E. and appeared in Italy several centuries before it did in Great Britain.

“In order to fight the way the marks show, there has to be a lot of training involved,” Hermann says. Because the marks are so consistent from sword to sword, they suggest different warriors weren’t swinging at random, but were using well-practiced techniques. Christian Horn, an archaeologist at the University of Gothenburg who was not involved in the research, agrees, and says the experiments offer quantitative evidence of things archaeologists had only been able to speculate about.

Sword-wielding scientists show how ancient fighting techniques spread across Bronze Age Europe (Science Magazine)

This is also important from a historical standpoint: it suggests that the Bronze Age saw the rise of a class of professional fighters, as opposed to the all-hands-on-deck mêlée style of all adult males that probably characterized Stone Age warfare. Because fighting became “professionalized” by these bronze weapons, which required extensive training to use effectively, the use of force passed into the hands of a specialist warrior caste who were able to impose their will on lesser-armed populations.

This probably explains at least some of the origins of inequality, as those who specialized in the use of violence (as opposed to farming or trading) could then perforce become a ruling class. Inequality always rises when the means of force become confined to a specific class of people. Note also that money in coined form was first invented to pay specialist mercenaries in the Greek states of Asia Minor. These mercenaries were likely the ones who were training in the intensive combat techniques described by the study above.

Related: Medieval battles weren’t as chaotic as people think nor as movies portray! (Reddit) Given how humans react to violence psychologically, how would medieval battles really look, as opposed to the battle scenes depicted in movies? (Hint: not like a mosh pit)

Possibly related: Modern men are wimps, according to new book (Phys.org). Controversial, but likely correct; our ancestors led much more physical lives, and the less fit would not have reproduced as well. My unprovable notion is that we became so effective at warfare that the most violent people died off in these types of conflicts, giving more placid people a reproductive advantage. Thus, we became less violent over time.

Definitely related: What Compelled the Roman Way of Warfare? Killing for the Republic (Real Clear Defense)

Any polity can field an army through compulsion or other violent means. What matters more is what makes your average person choose to stay on the battlefield. [Steele] Brand argues the Roman Republic motivated its soldiers by publicly honoring at all times the initiative, strength, discipline, perseverance, courage, and loyalty of individual citizens. Moreover, it was this combination of public and private values, flexible political institutions, and a tailored upbringing that gradually culminated in the superiority of the Roman legion against the arguably technically superior Macedonian phalanx at Pydna. Brand calls the entirety of this system “civic militarism,” defined as “self defense writ large for the state.”

Paging Dr. Julian Jaynes: Majority of authors ‘hear’ their characters speak, finds study (Guardian). See also The Origin of Consciousness Reading Companion Part 1 (Put a Number On It)

Collapse files:

…a new movement called “collapsology”—which warns of the possible collapse of our societies as we know them—is gaining ground.

With climate change exposing how unsustainable the economic and social model based on fossil fuels is, they fear orthodox thinking may be speeding us to our doom.

The theory first emerged from France’s Momentum Institute, and was popularised by a 2015 book, “How Everything Can Collapse”. Some of its supporters, like former French environment minister Yves Cochet, believe the coronavirus crisis is another sign of impending catastrophe.

While the mathematician, who founded France’s Green party, “still hesitates” about saying whether the virus will be the catalyst for a domino effect, he quoted the quip that “it’s too early to say if it’s too late”.

Yet Cochet—whose book “Before the Collapse” predicts a meltdown in the next decade—is convinced that the virus will lead to “a global economic crisis of greater severity than has been imagined”.

The 74-year-old, who retired to France’s rural Brittany region so he could live more sustainably, is also worried about an impending “global disaster with lots of victims, both economic and otherwise”.

“What is happening now is a symptom of a whole series of weaknesses,” warned Professor Yves Citton of Paris VIII University.

“It isn’t the end of the world but a warning about something that has already been set in motion,” he told AFP, “a whole series of collapses that have begun”.

The slide may be slow, said Jean-Marc Jancovici, who heads the Shift Project think-tank which aims to “free economics from carbon”.

But “a little step has been taken (with the virus) that there is no going back”, he argued.

Others have a more chilling take.

“The big lesson of history… and of the Horsemen of the Apocalypse is that pestilence, war and famine tend to follow in each other’s wake,” said Pablo Servigne, an ecologist and agricultural engineer who co-wrote “How Everything Can Collapse”.

“We have a pandemic which could lead to another shock—wars, conflicts and famines,” he added.

“And famines will make us more vulnerable to other pandemics.”

‘Collapsology’: Is this the end of civilisation as we know it? (Phys.org)

The last ice age peaked at the Last Glacial Maximum, around 26,000 years ago. The earth warmed over the following millennia, driven by an increase in solar radiation received by the earth due to changes in its orbit (the Milankovic cycles), amplified by CO₂ released from warming oceans, which further warmed the atmosphere.

But even as the earth warmed, the warming was interrupted by cooler periods known as “stadials”. These were caused by meltwater from disintegrating ice sheets, which cooled large regions of the ocean.

Marked climate variability and extreme weather events during the early Holocene retarded the development of sustainable agriculture.

Sparse human settlements existed about 12,000–11,000 years ago. The flourishing of human civilisation from about 10,000 years ago, and in particular from 7,000 years ago, depended critically on the stabilisation of climate conditions, which allowed the planting and harvesting of seed and the growing of crops, facilitating the growth of villages and towns, and thereby of civilisation.

Peak warming periods early in the Holocene were associated with heavy monsoons and heavy floods, likely reflected in the Noah’s ark story.

The climate stabilised about 7,000–5,000 years ago. This allowed the flourishing of civilisations along the Nile, Tigris, Euphrates, Indus and the Yellow River.

Cultivation in the ancient river valley civilisations depended on flood and ebb cycles, in turn dependent on seasonal rains and melting snows in the mountain sources of the rivers. These created the conditions for the production of surplus food.

When such conditions declined due to droughts or floods, civilisations collapsed. Examples include the decline of the Egyptian, Mesopotamian and Indus civilisations about 4,200 years ago due to severe drought.

Throughout the Holocene, relatively warm periods, such as the Medieval Warm Period (900–1200 AD), and cold periods, such as the Little Ice Age (around 1600–1700 AD), led to agricultural crises with consequent hunger, epidemics and wars. A classic account of the consequences of these events is presented in Jared Diamond’s book Collapse.

It’s not just Middle Eastern civilisations. Across the globe and throughout history, the rise and fall of civilisations such as the Maya in Central America, the Tiwanaku in Peru, and the Khmer Empire in Cambodia have been determined by the ebb and flow of droughts and floods.

Greenhouse gas levels were stable or declining between 8,000 and 6,000 years ago, but then began to rise slowly. According to William Ruddiman at the University of Virginia, this rise in greenhouse gases was due to deforestation, burning and land clearing by people. This stopped the decline in greenhouse gases and ultimately prevented the next ice age. If so, human-caused climate change began much earlier than we usually think.

Rises and falls in solar radiation continued to shift the climate. The Medieval Warm Period was driven by an increase in solar radiation, while the Little Ice Age was caused at least in part by a decrease.

Now we’ve changed the game again by releasing over 600 billion tonnes of carbon into the atmosphere since the Industrial Revolution, raising CO₂ concentrations from around 270 parts per million to about 400 parts per million…

Climate and the rise and fall of civilizations: a lesson from the past (The Conversation)
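
Those closing numbers check out, by the way. A widely used conversion factor (not in the article itself) is that emitting roughly 2.13 gigatonnes of carbon adds 1 ppm of CO₂ to the atmosphere, if none of it is absorbed. Here’s a quick back-of-the-envelope sketch; the 600 GtC and the 270 and 400 ppm figures are the article’s, the rest is my own arithmetic:

```python
# Back-of-the-envelope check on the article's closing numbers.
# Standard conversion: emitting ~2.13 GtC adds 1 ppm of CO2 to the atmosphere,
# assuming none of it is absorbed by sinks.
GTC_PER_PPM = 2.13

emitted_gtc = 600      # cumulative carbon emitted since the Industrial Revolution (article's figure)
co2_start_ppm = 270    # pre-industrial CO2 concentration (article's figure)
co2_now_ppm = 400      # modern CO2 concentration (article's figure)

potential_rise_ppm = emitted_gtc / GTC_PER_PPM    # rise if every tonne stayed airborne
observed_rise_ppm = co2_now_ppm - co2_start_ppm   # rise actually measured

airborne_fraction = observed_rise_ppm / potential_rise_ppm
print(f"Rise if all emissions stayed airborne: {potential_rise_ppm:.0f} ppm")  # ~282 ppm
print(f"Observed rise: {observed_rise_ppm} ppm")                               # 130 ppm
print(f"Airborne fraction: {airborne_fraction:.0%}")                           # ~46%
```

The gap between the ~282 ppm that 600 GtC would imply and the ~130 ppm actually observed is the portion of our emissions that ocean and land sinks have soaked up, consistent with the commonly cited airborne fraction of roughly 45%.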