I want to get back to some of the topics I’ve left hanging, but first I’d like to mention a few other topics that have been sadly neglected during the whole—er, pandemic thing—but that we frequently discuss here on the blog. Specifically archaeology and architecture. This one will be about archaeology.
I want to highlight something that came out about a month ago that you’re probably aware of. If not, here it is: the Amazon rain forest has been found to be one of the cradles of agriculture.
The original cradles of agriculture described in history textbooks were the great river valley of Mesopotamia between the Tigris and Euphrates rivers, along with the Nile valley. As archaeology expanded from its European origins, the Indus river valley in India/Pakistan and the Yellow river valley in China were included as cradles of agriculture. Then came New World sources of maize and potatoes in Central and South America. In recent years, archaeologists have included a few other places, notably Papua New Guinea. Now, it seems we can add the Amazon rain forest to the list:
There’s a small and exclusive list of places where crop cultivation first got started in the ancient world – and it looks as though that list might have another entry, according to new research of curious ‘islands’ in the Amazon basin.
The savannah of the Llanos de Moxos in northern Bolivia is littered with thousands of patches of forest, rising a few feet above the surrounding wetlands. Many of these forest islands, as researchers call them, are thought to be the remnants of human habitation from the early and mid-Holocene.
Now, thanks to new analysis of the sediment found in some of these islands, researchers have unearthed signs that these spots were used to grow cassava (manioc) and squash a little over 10,000 years ago.
That’s impressive, as this timing places them some 8,000 years earlier than scientists had previously found evidence for, indicating that the people who lived in this part of the world – the southwestern corner of the Amazon basin – got a head start on farming practices.
In fact, the findings suggest that southwestern Amazonia can now join China, the Middle East, Mesoamerica, and the Andes as one of the areas where organised plant growing first got going – in the words of the research team, “one of the most important cultural transitions in human history”.
The researchers were able to identify evidence of manioc (cassava, yuca) that was grown 10,350 years ago. Squash appears 10,250 years ago, and maize more recently – just 6,850 years ago.
“This is quite surprising,” said Dr [Umberto] Lombardo. “This is Amazonia, this is one of these places that a few years ago we thought to be like a virgin forest, an untouched environment. Now we’re finding this evidence that people were living there 10,500 years ago, and they started practising cultivation.”
The people who lived at this time probably also survived on sweet potato and peanuts, as well as fish and large herbivores. The researchers say it’s likely that the humans who lived here brought their plants with them. They believe their study is another example of the global impact of the environmental changes being felt as the world warmed up at the end of the last ice age.
“It’s interesting in that it confirms again that domestication begins at the start of the Holocene period, when we have this climate change that we see as we exit from the ice age,” said Dr Lombardo. “We entered this warm period, when all over the world at the same time, people start cultivating.”
Note that what was grown appears to be vegetable plants like cassava (yuca) and squash, not cereal grains. Recall James Scott’s point that annual cereal grains were a starting point for civilizations: because they were preservable and ripened at the same rate at the same time, they could be confiscated by central authorities. Cultures that subsisted on perishable garden plants, however, could escape the trap of civilization.
Here’s a major study that ties into the feasting theory: the first beer was brewed as part of funerary rites for the dead:
The first beer was for the dead. That’s according to a 2018 study of stone vessels from Raqefet Cave in Israel, a 13,000-year-old graveyard containing roughly 30 burials of the Natufian culture. On three limestone mortars, archaeologists found wear and tear and plant molecules, interpreted as evidence of alcohol production. Given the cemetery setting, researchers propose grog was made during funerary rituals in the cave, as an offering to the dearly departed and refreshment for the living. Raqefet’s beer would predate farming in the Near East by as much as 2,000 years — and booze production, globally, by some 4,000 years.
The beer hypothesis, published in the Journal of Archaeological Science: Reports, comes from Raqefet excavators, based at Israel’s University of Haifa, and Stanford University scientists, who conducted microscopic analyses. In previous research, they made experimental brews the ancient way, to see how the process altered artifacts. Some telltale signs were then identified on Raqefet stones: A roughly 10-inch diameter mortar, carved directly into the cave floor, had micro-scratches — probably from a wooden pestle — and starch with damage indicative of mashing, heating and fermenting, all steps in alcohol production. Two funnel-shaped stones had traces of cereals, legumes and flax, interpreted as evidence that they were once lined with woven baskets and used to store grains and other beer ingredients. Lead author Li Liu thinks Natufians also made bread, but that these three vessels were for beer — the earliest yet discovered.
The counterpoint is that they were baking bread instead, leading back to the old question: what were grains first cultivated for, beer or bread? My suspicion is the former, with the latter being an effective use of “surplus” resources, or a backup strategy in case of food shortages.
The connection between beer-brewing and funerary rites is significant, however. The feasting theory of inequality’s origins doesn’t go into much detail about why such feasts were held. But if such ritual feasts were held as a means of commemorating the dead—most likely tied to ancestor worship—then the existence of such events takes on additional importance.
When I talked about the history of cities and the feasting theory, I noted that these activities seem to have taken place in ritual areas that were marked off (sacred versus profane) for the purposes of feasting and trade, where multiple different cultures would coalesce and mingle. These locations appear to have played a crucial role in human social development, and they’ve been found all over the world. Archaeologists have been studying one in Florida:
More than a thousand years ago, people from across the Southeast regularly traveled to a small island on Florida’s Gulf Coast to bond over oysters, likely as a means of coping with climate change and social upheaval.
Archaeologists’ analysis of present-day Roberts Island, about 50 miles north of Tampa Bay, showed that ancient people continued their centuries-long tradition of meeting to socialize and feast, even after an unknown crisis around A.D. 650 triggered the abandonment of most other such ceremonial sites in the region. For the next 400 years, out-of-towners made trips to the island, where shell mounds and a stepped pyramid were maintained by a small group of locals. But unlike the lavish spreads of the past, the menu primarily consisted of oysters, possibly a reflection of lower sea levels and cool, dry conditions.
So I guess Florida has always been a magnet for tourists.
And although Stonehenge is well known, far less well known is Pömmelte, “Germany’s Stonehenge”.
Starting in April, a roughly 4,000-year-old settlement will be excavated to provide insights into Early Bronze Age life. Settlements of this size have not yet been found at the related henges in the British Isles.
Pömmelte is a ring-shaped sanctuary with earth walls, ditches and wooden piles that is located in the northeastern part of Germany, south of Magdeburg. The site is very much reminiscent of the world-famous monument Stonehenge, and it is likely that the people there performed very similar rituals to those of their counterparts in what is now Britain 4,300 years ago.
This place reminds me a lot of Woodhenge at the Cahokia complex (Wikipedia), which I was able to visit a few years ago. The presence of such similar structures separated by vast gulfs of time and space (precluding any chance of cultural contact) is something we need to think deeply about.
From the article above, I also learned about the Nebra Sky Disc (Wikipedia). Recall that the first cities were trying to replicate a “cosmic order” here on earth.
Humans began developing a complex culture as early as the Stone Age. This development was brought about by social interactions between various groups of hunters and gatherers, a UZH study has now confirmed…
The researchers equipped 53 adult Agta living in woodland in seven interconnected residential camps with tracking devices and recorded every social interaction between members of the different camps over a period of one month. The researchers also did the same for a different group, who lived on the coast…. The team of researchers then developed a computer model of this social structure and simulated the complex cultural creation of a plant-based medicinal product.
In this fictitious scenario, the people shared their knowledge of medicinal plants with every encounter and combined this knowledge to develop better remedies. This process gradually led to the development of a highly effective new medicinal product. According to the researchers’ simulation, an average of 250 (woodland camps) to 500 (coastal camps) rounds of social interactions were required for the medicinal product to emerge.
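Out of curiosity, I sketched out the bare bones of this kind of simulation. To be clear, this is my own toy version, not the UZH team’s model (theirs uses the actual measured contact networks); the number of ingredients and the encounter rule here are invented for illustration:

```python
import random

# Toy sketch of the study's core mechanism: agents meet pairwise, pool what
# they know about medicinal "ingredients", and we count the interactions
# needed before any one agent has assembled the complete remedy.
# All parameters except the number of agents are invented assumptions.

NUM_AGENTS = 53     # the number of tracked Agta adults in the study
INGREDIENTS = 5     # hypothetical number of steps in the final remedy

def interactions_until_remedy(seed: int) -> int:
    rng = random.Random(seed)
    # seed starting knowledge so that every ingredient is known by someone
    knowledge = [{i % INGREDIENTS} for i in range(NUM_AGENTS)]
    rng.shuffle(knowledge)
    count = 0
    while all(len(k) < INGREDIENTS for k in knowledge):
        i, j = rng.sample(range(NUM_AGENTS), 2)  # a random encounter
        knowledge[i] |= knowledge[j]             # both parties share
        knowledge[j] |= knowledge[i]
        count += 1
    return count

trials = [interactions_until_remedy(s) for s in range(50)]
print("mean interactions until a complete remedy:", sum(trials) / len(trials))
```

The interesting point the real model captures, which this sketch only gestures at, is that the structure of the contact network (woodland versus coastal camps) changes how quickly knowledge recombines.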
A lesser-known megalithic necropolis: the Ħal Saflieni Hypogeum (Wikipedia), built some 5,000 years ago. Do these look like they were built by people who were filthy and starving?
Related: I only recently heard about this site, but apparently there was a significant industrial complex devoted to the manufacture of flint tools that functioned during the Stone Age and well into the Bronze and Iron Ages: Grimes Graves (Wikipedia). This is a useful reminder that complex specialization of labor and regional comparative advantage have always been with us; they weren’t invented in the time of Smith or Ricardo. We just didn’t fetishize them the way we do now.
And the salt mines of Hallstatt in modern-day Austria have been worked for thousands of years, since long before the Bronze Age. Apparently, mining required child labor:
Mining there began at least 7,000 years ago and continues modestly today. That makes the UNESCO World Heritage site “the oldest industrial landscape in the world [that’s] still producing,” says [archaeologist Hans] Reschreiter, who has led excavations at Hallstatt for nearly two decades.
But the mine’s peak was during the Bronze and Iron ages, when salt’s sky-high value made Hallstatt one of Europe’s wealthiest communities. Archaeologists understand a great deal about operations then, thanks to an extraordinary hoard of artifacts including leather sacks, food scraps, human feces and millions of used torches.
Many of the finds are made of perishable materials that are usually quick to decay. They survived in the mine’s tunnels because salt is a preservative — the very reason it was in such high demand during Hallstatt’s heyday.
Among the artifacts, the small shoes and caps showed children were in the mine. But researchers needed more evidence to determine whether the young ones were merely tagging along with working parents or actually mining.
To understand the children’s roles, Austrian Academy of Sciences anthropologist Doris Pany-Kucera turned to their graves. In a study of 99 adults from Hallstatt’s cemetery, she found skeletal markers of muscle strain and injury, suggesting many villagers performed hard labor — some from an early age.
Then, in 2019, she reported her analysis of the remains of 15 children and teenagers, finding signs of repetitive work. Children as young as 6 suffered arthritis of the elbow, knee and spine. Several had fractured skulls or were missing bits of bone, snapped from a joint under severe strain. Vertebrae were worn or compressed on all individuals.
Combining clues from the Hallstatt bones and artifacts, researchers traced the children’s possible contributions to the salt industry. They believe the youngest children — 3- to 4-year-olds — may have held the torches necessary for light. By age 8, kids likely assumed hauling and crawling duties, carrying supplies atop their heads or shimmying through crevices too narrow for grown-ups…
It’s no surprise that the young labored at Hallstatt. Children are, and always have been, essential contributors to community and family work. A childhood of play and formal education is a relatively modern concept that even today exists mostly in wealthy societies.
There are those who say that, despite all our technological advancements, we haven’t really reduced the need for human labor. But that’s clearly untrue! We’ve already effectively eliminated the labor of everyone under 18, and from a practical standpoint, nearly everyone under 21. We just forget it because it’s been normalized, but people younger than 18 labored all throughout human history, even into the early twentieth century. Now they are no longer needed or wanted. And with ever more schooling required for jobs, we keep raising the age of entry into the workforce. Note that “retirement”—to the extent that it continues to exist—is also a modern phenomenon, removing people over 55 or 60 from the workforce. Labor has most certainly been eliminated, and will continue to be.
Archaeologists analyzed an ancient cemetery in Hungary containing the distinctive elongated skulls the Huns were known for:
They found that Mözs-Icsei dűlő was a remarkably diverse community and were able to identify three distinct groups across two or three generations (96 burials total) until the abandonment of Mözs cemetery around 470 AD: a small local founder group, with graves built in a brick-lined Roman style; a foreign group of twelve individuals of similar isotopic and cultural background, who appear to have arrived around a decade after the founders and may have helped establish the traditions of grave goods and skull deformation seen in later burials; and a group of later burials featuring mingled Roman and various foreign traditions.
51 individuals total, including adult males, females, and children, had artificially deformed skulls with depressions shaped by bandage wrappings, making Mözs-Icsei dűlő one of the largest concentrations of this cultural phenomenon in the region. The strontium isotope ratios at Mözs-Icsei dűlő were also significantly more variable than those of animal remains and prehistoric burials uncovered in the same geographic region of the Carpathian Basin, and indicate that most of Mözs’ adult population lived elsewhere during their childhood. Moreover, carbon and nitrogen isotope data attest to remarkable contributions of millet to the human diet.
Speaking of burials: researchers found 1,000-year-old burials in Siberia whose occupants wore copper masks: Mummified by accident in copper masks almost 1,000 years ago: but who were they? (Siberian Times) I thought this was fascinating, given that copper has been shown to kill coronaviruses, and we have been told to wear masks to prevent transmission. Copper-infused masks are becoming popular (a Google search turned up the above article). Coincidence? Probably.
Religion in South America:
An ancient group of people made ritual offerings to supernatural deities near the Island of the Sun in Lake Titicaca, Bolivia, about 500 years earlier than the Incas, according to an international team of researchers. The team’s findings suggest that organized religion emerged much earlier in the region than previously thought.
This is possibly the coolest scientific study ever conducted: a group of scientists have reconstructed Bronze Age fighting techniques by looking at the wear marks on Bronze Age weapons and armor. Wow! Time to redo that famous fight scene from Troy?
While a graduate student at Newcastle University, [University of Göttingen archaeologist Raphael Hermann] recruited members of a local club devoted to recreating and teaching medieval European combat styles, and asked them to duel with the replicas, using motions found in combat manuals written in the Middle Ages. After recording the combat sequences using high-speed cameras, the researchers noted the type and location of dents and notches left after each clash.
The team assigned characteristic wear patterns to specific sword moves and combinations. If the motions left the same distinctive marks found on Bronze Age swords, Hermann says, it was highly likely that Bronze Age warriors had also used those moves. For example, marks on the replica swords made by a technique known to medieval German duelists as versetzen, or “displacement”—locking blades in an effort to control and dominate an opponent’s weapon—were identical to distinct bulges found on swords from Bronze Age Italy and Great Britain.
Next, Hermann and colleagues put 110 Bronze Age swords from Italy and Great Britain under a microscope and cataloged more than 2500 wear marks. Wear patterns were linked to geography and time, suggesting distinct fighting styles developed over centuries… Displacement, for example, didn’t show up until 1300 B.C.E. and appeared in Italy several centuries before it did in Great Britain.
“In order to fight the way the marks show, there has to be a lot of training involved,” Hermann says. Because the marks are so consistent from sword to sword, they suggest different warriors weren’t swinging at random, but were using well-practiced techniques. Christian Horn, an archaeologist at the University of Gothenburg who was not involved in the research, agrees, and says the experiments offer quantitative evidence of things archaeologists had only been able to speculate about.
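To make the matching logic concrete: the method boils down to building a lookup from experimentally produced wear signatures to fighting techniques, then checking which signatures appear on the ancient swords. The actual study relies on microscopy and expert classification; the mark types and signatures below are invented placeholders, a minimal sketch of the idea:

```python
# Hypothetical wear-mark "signatures" for each replicated technique.
# The real catalog comes from microscopy of replica swords; these labels
# are invented for illustration only.
MOVE_SIGNATURES = {
    "versetzen (displacement)": {"bulge"},
    "parry": {"notch", "micro-scratch"},
    "edge strike": {"deep dent"},
}

def infer_moves(observed_marks: set[str]) -> list[str]:
    """Return techniques whose full signature appears among a sword's marks."""
    return [move for move, signature in MOVE_SIGNATURES.items()
            if signature <= observed_marks]

# e.g. a Bronze Age sword catalogued with bulges, notches and micro-scratches
print(infer_moves({"bulge", "notch", "micro-scratch"}))
# -> ['versetzen (displacement)', 'parry']
```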
This is also important from a historical standpoint: it indicates that the Bronze Age likely saw the rise of a class of professional fighters, as opposed to the all-hands-on-deck mêlée fighting style of all adult males that probably characterized Stone Age warfare. Because fighting became “professionalized” due to the existence of these bronze weapons—which required extensive training to use effectively—the use of force passed into the hands of a specialist warrior caste who were able to impose their will on lesser-armed populations.
This probably explains at least some of the origins of inequality, as those who specialized in the use of violence (as opposed to farming or trading) could then perforce become a ruling class. Inequality always rises when the means of force become confined to a specific class of people. Note also that money in coined form was first invented to pay specialist mercenaries in the Greek states of Asia Minor. These mercenaries were likely the ones who were training in the intensive combat techniques described by the study above.
Possibly related: Modern men are wimps, according to new book (Phys.org). Controversial, but likely correct; our ancestors had much more physical lives, and the less fit would not have reproduced as well. My unprovable notion is that we became so effective at warfare that the most violent people died off in these types of conflicts, giving more placid people a reproductive advantage. Thus, we became less violent over time.
Any polity can field an army through compulsion or other violent means. What matters more is what makes your average person choose to stay on the battlefield. [Steele] Brand argues the Roman Republic motivated its soldiers by publicly honoring at all times the initiative, strength, discipline, perseverance, courage, and loyalty of individual citizens. Moreover, it was this combination of public and private values, flexible political institutions, and a tailored upbringing that gradually culminated in the superiority of the Roman legion against the arguably technically superior Macedonian phalanx at Pydna. Brand calls the entirety of this system “civic militarism,” defined as “self defense writ large for the state.”
…a new movement called “collapsology”—which warns of the possible collapse of our societies as we know them—is gaining ground.
With climate change exposing how unsustainable the economic and social model based on fossil fuels is, they fear orthodox thinking may be speeding us to our doom.
The theory first emerged from France’s Momentum Institute, and was popularised by a 2015 book, “How Everything Can Collapse”. Some of its supporters, like former French environment minister Yves Cochet, believe the coronavirus crisis is another sign of impending catastrophe.
While the mathematician, who founded France’s Green party, “still hesitates” about saying whether the virus will be the catalyst for a domino effect, he quoted the quip that “it’s too early to say if it’s too late”.
Yet Cochet—whose book “Before the Collapse” predicts a meltdown in the next decade—is convinced that the virus will lead to “a global economic crisis of greater severity than has been imagined”.
The 74-year-old, who retired to France’s rural Brittany region so he could live more sustainably, is also worried about an impending “global disaster with lots of victims, both economic and otherwise”.
“What is happening now is a symptom of a whole series of weaknesses,” warned Professor Yves Citton of Paris VIII University.
“It isn’t the end of the world but a warning about something that has already been set in motion,” he told AFP, “a whole series of collapses that have begun”.
The slide may be slow, said Jean-Marc Jancovici, who heads the Shift Project think-tank which aims to “free economics from carbon”.
But “a little step has been taken (with the virus) that there is no going back”, he argued.
Others have a more chilling take.
“The big lesson of history… and of the Horsemen of the Apocalypse is that pestilence, war and famine tend to follow in each others’ wake,” said Pablo Servigne, an ecologist and agricultural engineer who co-wrote “How Everything Can Collapse”.
“We have a pandemic which could lead to another shock—wars, conflicts and famines,” he added.
“And famines will make us more vulnerable to other pandemics.”
The last ice age (or Last Glacial Maximum) peaked around 26,000 years ago. The earth warmed over the coming millennia, driven by an increase in radiation from the sun due to changes in the earth’s orbit (the Milankovitch cycles), amplified by CO₂ released from warming water, which further warmed the atmosphere.
But even as the earth warmed, the warming was interrupted by cooler periods known as “stadials”. These were caused by meltwater from shrinking ice sheets, which cooled large regions of the ocean.
Marked climate variability and extreme weather events during the early Holocene retarded development of sustainable agriculture.
Sparse human settlements existed about 12,000 – 11,000 years ago. The flourishing of human civilisation from about 10,000 years ago, and in particular from 7,000 years ago, critically depended on stabilisation of climate conditions which allowed planting and harvesting of seed and growing of crops, facilitating growth of villages and towns and thereby of civilisation.
Peak warming periods early in the Holocene were associated with prevalence of heavy monsoons and heavy floods, likely reflected in the Noah’s ark story.
The climate stabilised about 7,000 – 5,000 years ago. This allowed the flourishing of civilisations along the Nile, Tigris, Euphrates, Indus and the Yellow River.
The ancient river valley civilisations’ cultivation depended on flood and ebb cycles, in turn dependent on seasonal rains and melting snows at the mountain sources of the rivers. These formed the conditions for production of excess food.
When such conditions declined due to droughts or floods, civilisations collapsed. Examples include the decline of the Egyptian, Mesopotamian and Indus civilisations about 4,200 years ago due to severe drought.
Throughout the Holocene, relatively warm periods, such as the Medieval Warm Period (900–1200 AD), and cold periods, such as the Little Ice Age (around 1600–1700 AD), led to agricultural crises with consequent hunger, epidemics and wars. A classic account of the consequences of these events is presented in the book Collapse by Jared Diamond.
It’s not just Middle Eastern civilisations. Across the globe and throughout history the rise and fall of civilisations such as the Maya in Central America, the Tiwanaku in Peru, and the Khmer Empire in Cambodia, have been determined by the ebb and flow of droughts and floods.
Greenhouse gas levels were stable or declined between 8,000 and 6,000 years ago, but then began to rise slowly after 6,000 years ago. According to William Ruddiman at the University of Virginia, this rise in greenhouse gases was due to deforestation, burning and land clearing by people. This stopped the decline in greenhouse gases and ultimately prevented the next ice age. If so, human-caused climate change began much earlier than we usually think.
Rise and fall in solar radiation continued to shift the climate. The Medieval Warm Period was driven by an increase in solar radiation, while the Little Ice Age was caused at least in part by a decrease.
Now we’ve changed the game again by releasing over 600 billion tonnes of carbon into the atmosphere since the Industrial Revolution, raising CO₂ concentrations from around 270 parts per million to about 400 parts per million…
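As a back-of-the-envelope check on those figures (my own arithmetic, not the article’s, using the standard conversion of roughly 2.13 gigatonnes of carbon per ppm of atmospheric CO₂):

$$
\frac{600\ \text{GtC}}{2.13\ \text{GtC/ppm}} \approx 282\ \text{ppm potential rise}, \qquad
\frac{(400 - 270)\ \text{ppm}}{282\ \text{ppm}} \approx 46\%.
$$

In other words, only about half of the emitted carbon has stayed in the atmosphere; the remainder has been taken up by ocean and land sinks, which is consistent with the commonly cited airborne fraction of roughly 45 percent.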
With the global disruption of COVID-19, there have been a number of stories in news outlets documenting the history of past pandemics in an effort to make sense of it all. One name that has come up frequently is Walter Scheidel. The Stanford University historian wrote a book some years ago, “The Great Leveler,” which attracted a great deal of attention. In it, he contended that only catastrophes reduce wealth and income inequality; in their absence, it grows without bound. One recurring leveler was plagues and pandemics (along with war, famine, collapse and political revolution).
I’ve gone back and cleaned up the typos (the ones I found, anyway). I think these posts are actually quite good (and I’m a harsh critic of my own work), so they’re most likely worth a reread (if I do say so myself).
Here’s Scheidel himself writing in The New York Times summarizing the leveling effect he found during pandemics:
…as successive waves of plague shrank the work force, hired hands and tenants “took no notice of the king’s command,” as the Augustinian clergyman Henry Knighton complained. “If anyone wanted to hire them he had to submit to their demands, for either his fruit and standing corn would be lost or he had to pander to the arrogance and greed of the workers.”
As a result of this shift in the balance between labor and capital, we now know…that real incomes of unskilled workers doubled across much of Europe within a few decades. According to tax records that have survived in the archives of many Italian towns, wealth inequality in most of these places plummeted.
In England, workers ate and drank better than they did before the plague and even wore fancy furs that used to be reserved for their betters. At the same time, higher wages and lower rents squeezed landlords, many of whom failed to hold on to their inherited privilege. Before long, there were fewer lords and knights, endowed with smaller fortunes, than there had been when the plague first struck…
In all of these cases, he notes, the elites pushed back. They weren’t content with their “lessers” having a greater share of the pie (which is, after all, why they were elites):
In late medieval Eastern Europe, from Prussia and Poland to Russia, nobles colluded to impose serfdom on their peasantries to lock down a depleted labor force. This altered the long-term economic outcomes for the entire region: Free labor and thriving cities drove modernization in Western Europe, but in the eastern periphery, development fell behind.
Farther south, the Mamluks of Egypt, a regime of foreign conquerors of Turkic origin, maintained a united front to keep their tight control over the land and continue exploiting the peasantry. The Mamluks forced the dwindling subject population to hand over the same rent payments, in cash and kind, as before the plague. This strategy sent the economy into a tailspin as farmers revolted or abandoned their fields.
The elite pushback often failed in the short term:
…more often than not, repression failed. The first known plague pandemic in Europe and the Middle East, which started in 541, provides the earliest example. Anticipating the English Ordinance of Laborers by 800 years, the Byzantine emperor Justinian railed against scarce workers who “demand double and triple wages and salaries, in violation of ancient customs” and forbade them “to yield to the detestable passion of avarice” — to charge market wages for their labor. The doubling or tripling of real incomes reported on papyrus documents from the Byzantine province of Egypt leaves no doubt that his decree fell on deaf ears…
During the Great Rising of England’s peasants in 1381, workers demanded, among other things, the right to freely negotiate labor contracts. Nobles and their armed levies put down the revolt by force, in an attempt to coerce people to defer to the old order. But the last vestiges of feudal obligations soon faded. Workers could hold out for better wages, and landlords and employers broke ranks with one another to compete for scarce labor.
And yet, in the long term, people ended up no better off than when they started:
None of these stories had a happy ending for the masses. When population numbers recovered after the plague of Justinian, the Black Death and the American pandemics, wages slid downward and elites were firmly back in control. Colonial Latin America went on to produce some of the most extreme inequalities on record. In most European societies, disparities in income and wealth rose for four centuries all the way up to the eve of World War I. It was only then that a new great wave of catastrophic upheavals undermined the established order, and economic inequality dropped to lows not witnessed since the Black Death, if not the fall of the Roman Empire.
Here are some other pages from history, in roughly chronological order:
White and Mordechai focused their efforts on the city of Constantinople, capital of the Roman Empire, which had a comparatively well-described outbreak in 542 CE. Some primary sources claim plague killed up to 300,000 people in the city, which had a population of some 500,000 people at the time. Other sources suggest the plague killed half the empire’s population. Until recently, many scholars accepted this image of mass death. By comparing bubonic, pneumonic, and combined transmission routes, the authors showed that no single transmission route precisely mimicked the outbreak dynamics described in these primary sources.
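The paper itself builds detailed compartment models for each route; purely to illustrate why the choice of transmission route matters so much for the death toll, here is a toy SIR calculation. Every parameter below (reproduction number, infectious period, case fatality rate) is a placeholder I have invented, not a value from the study:

```python
# Toy SIR-with-deaths model for a city of 500,000 (Constantinople's rough
# population). This is an illustration of why transmission assumptions
# matter, not a reconstruction of White and Mordechai's models.

POP = 500_000

def plague_deaths(r0: float, infectious_days: float, fatality: float) -> float:
    """Integrate a simple SIR model with Euler steps; return total deaths."""
    gamma = 1.0 / infectious_days       # removal (recovery/death) rate
    beta = r0 * gamma                   # transmission rate
    s, i, r = POP - 1.0, 1.0, 0.0
    dt = 0.1                            # days per step
    while i > 1e-3:
        new_inf = beta * s * i / POP * dt
        new_rem = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rem
        r += new_rem
    return fatality * r

# Hypothetical parameter sets for two transmission routes:
for label, r0, days, cfr in [("bubonic-like", 1.5, 10, 0.6),
                             ("pneumonic-like", 3.0, 3, 0.95)]:
    print(f"{label:15s} deaths ≈ {plague_deaths(r0, days, cfr):,.0f}")
```

With these made-up numbers the two routes produce wildly different tolls (on the order of 170,000 versus 450,000 deaths), neither of which lands cleanly on the chroniclers’ 300,000 figure; getting a single route to match the reported dynamics is exactly the difficulty the authors ran into.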
Heraclitus compared our lot to beasts, winos, deep sleepers and even children – as in, “Our opinions are like toys.” We are incapable of grasping the true logos. History, with rare exceptions, seems to have vindicated him.
There are two key Heraclitus mantras.
1) “All things come to pass according to conflict.” So the basis of everything is turmoil. Everything is in flux. Life is a battleground. (Sun Tzu would approve.)
2) “All things are one.” This means opposites attract. This is what Heraclitus found when he went tripping inside his soul – with no help from lysergic substances. No wonder he faced a Sisyphean task trying to explain this to us, mere children.
And that brings us to the river metaphor. Everything in nature depends on underlying change. Thus, for Heraclitus, “as they step into the same rivers, other and still other waters flow upon them.” So each river is composed of ever-changing waters.
Despite the lack of healthcare and public health measures as we understand them – and we will never know how many plague victims died of neglect, hunger and thirst, or of secondary infections – the plague in medieval England, and Western Europe as a whole, was mediated by a system of research, intellectual authority and technical countermeasures.
But that system was religious, based on the Christian church’s management of the passage of souls from this earth to the next world. The forerunner of the modern emergency vehicle was the bell of the priest’s attendants, advising the dying that relief was at hand, in the form of an expert trained and qualified to take confession and administer the other sacraments that would ensure safe passage, if not to heaven, at least to purgatory.
The dividing line between rich and poor wasn’t so much access to drugs or the best doctors as to post-mortem religious services: the prayers, candles, masses and chantries that were meant to speed the dead to a better hereafter. The technical emergencies the authorities faced weren’t shortages of hospital beds and doctors but of candle wax and confessors. Priests were not immune to the plague.
‘Emergency’, or its Latin equivalent, was the word used by the bishop of Bath and Wells in January 1349, six months after the plague began in England, when he broadcast an urgent message to his flock via the surviving parish priests in his diocese. ‘We understand,’ he wrote, ‘that many people are dying without the sacrament of penance, because they do not know what they ought to do in such an emergency and believe that even in an emergency confession of their sins is no use or worth unless made to [an ordained] priest.’ What they had to do, he told them, was ‘make confession of their sins, according to the teaching of the apostle, to any lay person, even to a woman if a man is not available.’
It’s hard to keep a virulent disease down. The first and biggest burst of plague lasted from the late 1340s until about 1353. Just as the world started thinking things were getting back to normal, another wave hit in 1360. After that there were new waves every 10 years or so. Europe’s population didn’t get back to pre-plague levels for a century and a half.
Quarantining was invented during the first wave of bubonic plague in the 14th century, but it was deployed more systematically during the Great Plague. Public servants called searchers ferreted out new cases of plague, and quarantined sick people along with everyone who shared their homes. People called warders painted a red cross on the doors of quarantined homes, alongside a paper notice that read “LORD HAVE MERCY UPON US.” (Yes, the all-caps was mandatory).
The government supplied food to the housebound. After 40 days, warders painted over the red crosses with white crosses, ordering residents to sterilize their homes with lime. Doctors believed that the bubonic plague was caused by “smells” in the air, so cleaning was always recommended. They had no idea that it was also a good way to get rid of the ticks and fleas that actually spread the contagion.
Of course, not everyone was compliant. Legal documents at the U.K. National Archives show that in April 1665, Charles II ordered severe punishment for a group of people who took the cross and paper off their door “in a riotious manner,” so they could “goe abroad into the street promiscuously, with others.” It’s reminiscent of all those modern Americans who went to the beaches in Florida over spring break, despite what public health experts told them.
Just as some American politicians blame the Chinese for the coronavirus, there were 17th century Brits who blamed the Dutch for spreading the plague. Others blamed Londoners. Mr. Pepys had relocated his family to a country home in Woolwich, and writes in his diary that the locals “are afeard of London, being doubtfull of anything that comes from thence, or that hath lately been there … I was forced to say that I lived wholly at Woolwich.”
In the cold autumn of 1629, the plague came to Italy. It arrived with the German mercenaries (and their fleas) who marched through the Piedmont countryside. The epidemic raged through the north, only slowing when it reached the natural barrier of the Apennines. On the other side of the mountains, Florence braced itself. The officials of the Sanità, the city’s health board, wrote anxiously to their colleagues in Milan, Verona, Venice, in the hope that studying the patterns of contagion would help them protect their city. Reports came from Parma that its ‘inhabitants are reduced to such a state that they are jealous of those who are dead’. The Sanità learned that, in Bologna, officials had forbidden people to discuss the peste, as if they feared you could summon death with a word.
Plague was thought to spread through corrupt air, on the breath of the sick or trapped in soft materials like cloth or wood, so in June 1630 the Sanità stopped the flow of commerce and implemented a cordon sanitaire across the mountain passes of the Apennines. But they soon discovered that the boundary was distressingly permeable. Peasants slipped past bored guards as they played cards. In the dog days of the summer, a chicken-seller fell ill and died in Trespiano, a village in the hills above Florence. The city teetered on the brink of calamity.
By August, Florentines were dying. The archbishop ordered the bells of all the churches in the city to be rung while men and women fell to their knees and prayed for divine intercession. In September, six hundred people were buried in pits outside the city walls. As panic mounted, rumours spread: about malicious ‘anointers’, swirling infection through holy water stoups, about a Sicilian doctor who poisoned his patients with rotten chickens. In October, the number of plague burials rose to more than a thousand. The Sanità opened lazaretti, quarantine centres for the sick and dying, commandeering dozens of monasteries and villas across the Florentine hills. In November, 2100 plague dead were buried. A general quarantine seemed the only answer. In January 1631, the Sanità ordered the majority of citizens to be locked in their homes for forty days under threat of fines and imprisonment.
In his Memoirs of the Plague in Florence, Giovanni Baldinucci described how melancholy it was ‘to see the streets and churches without anybody in them’. As the city fell quiet, ordinary forms of intimacy were forbidden. Two teenage sisters, Maria and Cammilla, took advantage of their mother’s absence in the plague hospital to dance with friends who lived in the same building. When they were discovered, their friends’ parents were taken to prison. At their trial, the mother, Margherita, blamed the two girls: ‘Oh traitors, what have you done?’ Another pair of sisters found relief from the boredom of quarantine by tormenting their brother. Arrested after one of the Sanità’s policemen saw them through an open door, one of them explained in court that ‘in order to pass the time we dressed our brother up in a mask, and we were dancing among ourselves, and while he was … dressed up like that, the corporal passed by … and saw what was going on inside the house.’ Dancing and dressing up were treacherous actions, violating the Sanità’s measures to control movement, contact, breath. But loneliness afflicted people too…
The poor were judged not only careless but physically culpable, their bodies frustratingly vulnerable to disease. The early decades of the 17th century in Europe saw widespread famines, sky-high grain prices, declining wages, political breakdown and violent religious conflicts. (This is the ‘general crisis of the 17th century’ that Important Male Historians like to debate.) One Florentine administrator, surveying the surrounding countryside, reported that even before the epidemic struck, villages were ‘full of people, who feed themselves with myrtle berries, acorns and grasses, and whom one sees along the roads seeming like corpses who walk’. The city was not much better. A diarist in Florence in 1630 noted the ‘many poor children who eat the stalks of cabbages that they find on the street, as though, through their hunger, they seem like fruit’. Famine was compounded by the steep decline of the textile industry in the city, as producers in England, Holland and Spain undercut prices; the number of wool workshops halved between 1596 and 1626. These long, lean years of unemployment and hunger had left Florentines acutely susceptible to the coming epidemic.
The Sanità arranged the delivery of food, wine and firewood to the homes of the quarantined (30,452 of them). Each quarantined person received a daily allowance of two loaves of bread and half a boccale (around a pint) of wine. On Sundays, Mondays and Thursdays, they were given meat. On Tuesdays, they got a sausage seasoned with pepper, fennel and rosemary. On Wednesdays, Fridays and Saturdays, rice and cheese were delivered; on Friday, a salad of sweet and bitter herbs. The Sanità spent an enormous amount of money on food because they thought that the diet of the poor made them especially vulnerable to infection, but not everyone thought it was a good idea. Rondinelli recorded that some elite Florentines worried that quarantine ‘would give [the poor] the opportunity to be lazy and lose the desire to work, having for forty days been provided abundantly for all their needs’.
The provision of medicine was also expensive. Every morning, hundreds of people in the lazaretti were prescribed theriac concoctions, liquors mixed with ground pearls or crushed scorpions, and bitter lemon cordials. The Sanità did devolve some tasks to the city’s confraternities. The brothers of San Michele Arcangelo conducted a housing survey to identify possible sources of contagion; the members of the Archconfraternity of the Misericordia transported the sick in perfumed willow biers from their homes to the lazaretti. But mostly, the city government footed the bill. Historians now interpret this extensive spending on public health as evidence of the state’s benevolence: if tracts like Righi’s brim over with intolerance towards the poor, the account books of the Sanità tell an unflashy story of good intentions.
But the Sanità – making use of its own police force, court and prison – also punished those who broke quarantine. Its court heard 566 cases between September 1630 and July 1631, with the majority of offenders – 60 per cent – arrested, imprisoned, and later released without a fine. A further 11 per cent were imprisoned and fined. On the one hand, the majority of offenders were spared the harshest penalties, of corporal punishment or exile. On the other, being imprisoned in the middle of a plague epidemic was potentially lethal; and the fines levied contributed to the operational budget of the public health system. The Sanità’s lavish spending on food and medicine suggests compassion in the face of poverty and suffering. But was it kindness, if those salads and sausages were partly paid for by the same desperate people they were intended to help? The Sanità’s intentions may have been virtuous, but they were nevertheless shaped by an intractable perception of the poor as thoughtless and lazy, opportunists who took advantage of the state of emergency.
Early modern historians used to be interested in the idea of the ‘world turned upside down’: in moments of inversion during carnival when a pauper king was crowned and the pressures of a deeply unequal society released. But what emerges from the tangle of stories in John Henderson’s book is a sense that for many the world stood still during the plague. The disease waned in the early summer of 1631 and, in June, Florentines emerged onto the streets to take part in a Corpus Christi procession, thanking God for their reprieve. When the epidemic finally ended, about 12 per cent of the population of Florence had died. This was a considerably lower mortality rate than other Italian cities: in Venice 33 per cent of the population; in Milan 46 per cent; while the mortality rate in Verona was 61 per cent. Was the disease less virulent in Florence or did the Sanità’s measures work? Percentages tell us something about living and dying. But they don’t tell us much about survival. Florentines understood the dangers, but gambled with their lives anyway: out of boredom, desire, habit, grief…
Florence Under Siege: Surviving Plague in an Early Modern City by John Henderson.
The majority of the population feared and condemned inoculation. Even many of those who were in favor of it were torn by doubts and religious scruples. Was inoculation a “lawful” practice? Was smallpox not a “judgement of God,” sent to punish and humble the people for their sins? Was being inoculated not like “taking God’s Work out of His Hand”?
Douglass played upon such popular scruples to the apparent discomfiture of his clerical opponents. Turning to the ministers he challenged them to determine, as a “Case of Conscience,” how placing more trust in human measures than in God was consistent with the devotion and subjection owed to the all-wise providence of the Lord. That he had not raised this issue in good faith becomes evident from a passage contained in a private letter suggesting jeeringly that his correspondent might perhaps admire how the clergy reconciled inoculation with their doctrine of predestination…
Ever since she had accompanied her husband on a diplomatic mission to Turkey, where she had become acquainted with inoculation and convinced of its merits, it had been Lady Mary Wortley Montagu’s ambition to bring “this useful invention into fashion in England.” That the country’s best medical minds had not sanctioned the practice did not deter Lady Mary. She bided her time. In the 1721 epidemic she asked Charles Maitland, the physician who four years earlier had inoculated her young son in Constantinople, to perform the operation now on her little daughter. She also enlisted the interest of the Princess of Wales, at whose request the King agreed to pardon a number of prisoners who were under sentence of death if they submitted to inoculation. Six convicts in Newgate Prison were ready to do so, and on August 9, about the time Boylston was injecting his patients, they were inoculated by Maitland. The results at first were good. The ice had been broken and during the next months further persons underwent inoculation at his hands. The culmination of Lady Mary’s crusade was the inoculation of the daughters of the Prince and Princess of Wales…
With improvement in its techniques, inoculation gained increasing favor as a method for the prophylaxis of smallpox until it finally, nearly eighty years later, gave way to Jenner’s magnificent discovery of vaccination.
Asiatic cholera, one of humanity’s greatest scourges in the modern period, came to Europe for the first time in the years after 1817, traveling by ship and caravan route from the banks of the Ganges, where it was endemic, to the Persian Gulf, Mesopotamia and Iran, the Caspian Sea and southern Russia, and then—thanks to troop movements occasioned by Russia’s wars against Persia and Turkey in the late 1820s and its suppression of the revolt in Poland in 1830–1831—to the shores of the Baltic Sea. From there its spread westward was swift and devastating, and before the end of 1833 it had ravaged the German states, France, and the British Isles and passed on to Canada, the western and southern parts of the United States, and Mexico.
Typhoid was a killer but it belonged to another world. The disease thrived in the overcrowded, insanitary conditions of New York’s slums, such as Five Points, Prospect Hill and Hell’s Kitchen. The family of one of the victims hired a researcher called George Soper and the diligent Mr Soper proved to be Mary’s nemesis – even though when he first tracked her down she chased him out of her kitchen with a carving fork. And that’s part of the problem with Mary.
It’s possible to sympathise with her refusal to believe that she could be transmitting a disease from which she never suffered herself. But Mr Soper had correctly identified her as an asymptomatic carrier of Typhoid fever. She would never get the disease herself but would never stop giving it to other people.
Not surprisingly, Mary Mallon found this impossible to understand. But the New York authorities were desperate and in 1907 Mary was exiled to the isolation facility on North Brother Island in the river outside New York.
At the end of the 19th century, one in seven people around the world had died of tuberculosis, and the disease ranked as the third leading cause of death in the United States. While physicians had begun to accept German physician Robert Koch’s scientific confirmation that TB was caused by bacteria, this understanding was slow to catch on among the general public, and most people gave little attention to the behaviors that contributed to disease transmission. They didn’t understand that things they did could make them sick.
In his book, Pulmonary Tuberculosis: Its Modern Prophylaxis and the Treatment in Special Institutions and at Home, S. Adolphus Knopf, an early TB specialist who practiced medicine in New York, wrote that he had once observed several of his patients sipping from the same glass as other passengers on a train, even as “they coughed and expectorated a good deal.” It was common for family members, or even strangers, to share a drinking cup.
With Knopf’s guidance, in the 1890s the New York City Health Department launched a massive campaign to educate the public and reduce transmission. The “War on Tuberculosis” public health campaign discouraged cup-sharing and prompted states to ban spitting inside public buildings and transit and on sidewalks and other outdoor spaces—instead encouraging the use of special spittoons, to be carefully cleaned on a regular basis. Before long, spitting in public spaces came to be considered uncouth, and swigging from shared bottles was frowned upon as well. These changes in public behavior helped successfully reduce the prevalence of tuberculosis.
Hassler shared his doubts about a closure order, but suggested that a short closure order would “limit most of all the cases to the home and give the other places a chance to thoroughly clean up and thus we may bring about a condition that will reduce the number of cases.” Several in attendance felt that a general closure order would induce panic in the people, would be costly, and would not stop the spread of the epidemic. Theater owners and dance hall operators supported a closure order, hoping that it would bring a quick end to the epidemic that was already causing a drastic reduction in revenue (one owner estimated that his receipts had fallen off 40% since the start of the epidemic). After some discussion, the Board of Health voted to close all places of public amusement, ban all lodge meetings, close all public and private schools, and to prohibit all dances and other social gatherings effective at 1:00 am on Friday, October 18. The Board did not close churches, but instead recommended that services and socials be either discontinued during the epidemic or held in the open air. City police were given a list of the restrictions and directed to ensure compliance with the order. The Liberty Loan drive, always the concern of citizens as they tried to outdo other cities in fundraising, would be allowed to continue by permit, as would all public meetings.
Despite the closure order and gathering ban, the centerpiece of San Francisco’s crusade against influenza was the face mask. Several other cities also mandated their use, and many more recommended them for private citizens as well as for physicians, nurses, and attendants who cared for the ill. But it was San Francisco that pushed for the early and widespread use of masks as a way to prevent the spread of the dread malady. On October 18, the day that the other health measures went into effect, Hassler ordered that all barbers wear masks while with customers, and recommended clerks who came into contact with the general public also don them. The next day, Hassler added hotel and rooming house employees, bank tellers, druggists, store clerks, and any other person serving the public to the list of those required to wear masks. Citizens were again strongly urged to wear masks while in public. On October 21, the Board of Health met and issued a strong recommendation to all residents to wear a mask while in public.
The wearing of a mask immediately became a symbol of wartime patriotism…
It’s difficult to say where this pandemic is leading. On the one hand, it has revealed the extent to which the most essential workers of our society are underpaid and undervalued. It has shown how dependent we are on transient and undocumented workers who are routinely brutalized, especially in the food system. It has exposed the dark underbelly of how food ends up on our shelves and how fragile our food system really is. It has led to an upsurge in union activism and strikes. It has demonstrated the fragility of long, just-in-time supply chains and the downside of outsourcing absolutely everything, such that no one country can produce anything anymore.
It has laid bare the cracks in our society. It has shown that the philosophy of “small government” promoted by billionaires and corporations is a disaster in times of crisis. It has shown that the pattern of crippling and hobbling state and local governments in favor of empowering markets and wealthy private actors is counterproductive. It has shown the utter folly of tying the basics of life to formal employment, such as housing and health care. It has shown that depending on “free markets” for absolutely everything doesn’t work when those markets shut down due to inevitable crises. It has shown the fecklessness and incompetence of America’s leaders, as well as their amorality and bottomless greed.
Yet it has also empowered authoritarians and dictators the world over. It has superempowered the ability of states to track and monitor their citizens. It has devastated local economies and small businesses, while shifting wealth, power, and economic activity to transnational corporations who have access to unlimited money from captured governments. It has led to an upsurge in activity among the extremist far-right and well-armed and organized Fascist militias. The stock market reaches a new high every time the unemployment rate goes up, while the financial industry is bailed out. Unemployment is at Great Depression levels, while workers in the U.S. are told by politicians to fend for themselves. “Essential” workers are ordered back to work or threatened with benefit cut-offs. To date, it has increased inequality.
It has also reduced pollution levels and crippled much of air travel, perhaps forever. It has substantially reduced demand for fossil fuels, even as prices reach all-time lows. It has caused cities to close off streets and avenues to cars in favor of bicycles and pedestrians. It has increased the viability of working from home.
In short, it’s complicated. But much of what happens will be up to us. Will we become more extremist, authoritarian and unequal? Will we continue to embrace the Social Darwinism promoted by our betters? Or will we demand that essential workers be paid better, that unions no longer be suppressed, that working hours drop, that commuting go away, that streets be prioritized for bikes, and that the government spend its trillions on helping the average citizen rather than just big corporations and the investor class? It could go either way. Walter Scheidel concludes:
In looking for illumination from the past on our current pandemic, we must be wary of superficial analogies. Even in the worst-case scenario, Covid-19 will kill a far smaller share of the world’s population than any of these earlier disasters did, and it will touch the active work force and the next generation even more lightly. Labor won’t become scarce enough to drive up wages, nor will the value of real estate plummet. And our economies no longer rely on farmland and manual labor.
Yet the most important lesson of history endures. The impact of any pandemic goes well beyond lives lost and commerce curtailed. Today, America faces a fundamental choice between defending the status quo and embracing progressive change. The current crisis could prompt redistributive reforms akin to those triggered by the Great Depression and World War II, unless entrenched interests prove too powerful to overcome.
We’ve talked extensively about how the basic constituent of human society is the extended kinship group. In many parts of the world, this is still the default form of human social organization. If there is any “natural” form of human social organization discernible from evolutionary biology, this is it.
From it all the basic structures of traditional societies are derived: religion, politics, law, marriage, inheritance, etc. We’ve frequently mentioned Henry Sumner Maine’s book, Ancient Law. The entire book can be summed up in the following passages:
[A]rchaic law … is full, in all its provinces, of the clearest indications that society in primitive times was not what it is assumed to be at present, a collection of *individuals*. In fact, and in the view of the men who composed it, it was an *aggregation of families*. The contrast may be most forcibly expressed by saying that the *unit* of an ancient society was the Family, of a modern society the Individual.
We must be prepared to find in ancient law all the consequences of this difference. It is so framed as to be adjusted to a system of small independent corporations. It is therefore scanty, because it is supplemented by the despotic commands of the heads of households. It is ceremonious, because the transactions to which it pays regard resemble international concerns much more than the quick play of intercourse between individuals. Above all it has a peculiarity of which the full importance cannot be shown at present.
It takes a view of *life* wholly unlike any which appears in developed jurisprudence. Corporations *never die*, and accordingly primitive law considers the entities with which it deals, i. e. the patriarchal or family groups, as perpetual and inextinguishable. This view is closely allied to the peculiar aspect under which, in very ancient times, moral attributes present themselves.
The moral elevation and moral debasement of the individual appear to be confounded with, or postponed to, the merits and offences of the group to which the individual belongs. If the community sins, its guilt is much more than the sum of the offences committed by its members; the crime is a corporate act, and extends in its consequences to many more persons than have shared in its actual perpetration. If, on the other hand, the individual is conspicuously guilty, it is his children, his kinsfolk, his tribesmen, or his fellow-citizens, who suffer with him, and sometimes for him.
It thus happens that the ideas of moral responsibility and retribution often seem to be more clearly realised at very ancient than at more advanced periods, for, as the family group is immortal, and its liability to punishment indefinite, the primitive mind is not perplexed by the questions which become troublesome as soon as the individual is conceived as altogether separate from the group.
On the difference between laws based on lone individuals, and laws based on social groups, he writes:
…It will be observed, that the acts and motives which these theories [of jurisprudence] suppose are the acts and motives of Individuals. It is each Individual who for himself subscribes the Social Compact. It is some shifting sandbank in which the grains are Individual men, that according to the theory of Hobbes is hardened into the social rock by the wholesome discipline of force…
But Ancient Law, it must again be repeated, knows next to nothing of Individuals. It is concerned not with Individuals, but with Families, not with single human beings, but groups. Even when the law of the State has succeeded in permeating the small circles of kindred into which it had originally no means of penetrating, the view it takes of Individuals is curiously different from that taken by jurisprudence in its maturest stage. The life of each citizen is not regarded as limited by birth and death; it is but a continuation of the existence of his forefathers, and it will be prolonged in the existence of his descendants…
As we saw last time, these are called identity rules, as opposed to personal rules, which deal mainly with specific, unique individuals, and general rules, which theoretically apply to everyone equally, regardless of one’s rank, kinship group, ethnic background, religious beliefs, wealth, or any other intrinsic characteristic.
Last time we saw that general rules came about because it became impossible for rulers to sort people by religion after the Catholic Church fragmented, despite numerous failed attempts by “all the king’s horses and all the king’s men” to put Humpty Dumpty back together again. Religious minorities began springing up all over Europe like mushrooms after a rain, challenging the old ways of ruling. Martin Luther only wanted to reform the universal Church; instead he broke it apart. Luther’s emphasis on a personal relationship with God through reading the Bible directly (something that was only possible in Early Modernity) meant that the intermediaries between man and God—the Church and priesthood—saw their power and influence diminish. This, in turn, empowered ambitious Early Modern rulers.
General rules supplanted the ancient laws described by Maine above, leading to a more fragmented and individualistic society. This, in turn, allowed for the commodification of land and labor which is necessary for capitalism. For example, the selling off of the monasteries seems to have kickstarted the first large real estate markets in England. As Maine argued, status was replaced by contract; Gemeinschaft was supplanted by Gesellschaft.
But, in reality, individualism in Europe was under way long before that.
Europe has long shown a curious lack of extended kinship groups, that is, tribes. If you’ve read Roman history, you know that the Western Empire came under pressure from large migrations of tribal peoples that we subsume under the label “Germanic,” due to their languages, along with some other exotic breeds like the Asiatic Huns (who, despite the popular association, are not the ancestors of modern-day Hungarians; the Magyars arrived centuries later). Their tribal structure, from what little we can determine, seems to have been quite similar to that of tribal peoples the world over, including in North America, Africa, and Asia.
I’m sure you can recall the names of some of them: the Lombards, the Alemanni, the Burgundians, the Visigoths, the Ostrogoths, the Frisians, the Angles and Saxons, the Beans and Franks, and many, many more. The Goths managed to devastate the Roman Empire despite their mopey attitudes and all-black clothing, while the Vandals left spraypaint up and down the Iberian peninsula and down into North Africa.
As I said last time, ancient societies were collectivist by default. But this all changed, particularly in Western Europe. But why Europe? Why was Europe the apparent birthplace of this radically new way of life?
That’s the subject of the paper I’m discussing today, which has received a fairly large amount of press attention. The paper itself is 178 pages—basically a small book (although much of that is data). The idea is that these extended kinship groups were broken up by the Roman Catholic Church via its strict prohibition against marriages between close kin, especially between cousins.
[A] new study traces the origins of contemporary individualism to the powerful influence of the Catholic Church in Europe more than 1,000 years ago, during the Middle Ages.
According to the researchers, strict church policies on marriage and family structure completely upended existing social norms and led to what they call “global psychological variation,” major changes in behavior and thinking that transformed the very nature of the European populations.
The study, published this week in Science, combines anthropology, psychology and history to track the evolution of the West, as we know it, from its roots in “kin-based” societies. The antecedents consisted of clans, derived from networks of tightly interconnected ties, that cultivated conformity, obedience and in-group loyalty—while displaying less trust and fairness with strangers and discouraging independence and analytic thinking.
The engine of that evolution, the authors propose, was the church’s obsession with incest and its determination to wipe out the marriages between cousins that those societies were built on. The result, the paper says, was the rise of “small, nuclear households, weak family ties, and residential mobility,” along with less conformity, more individuality, and, ultimately, a set of values and a psychological outlook that characterize the Western world. The impact of this change was clear: the longer a society’s exposure to the church, the greater the effect.
Around A.D. 500, explains Joseph Henrich, chair of Harvard University’s department of human evolutionary biology and senior author of the study, “the Western church, unlike other brands of Christianity and other religions, begins to implement this marriage and family program, which systematically breaks down these clans and kindreds of Europe into monogamous nuclear families. And we make the case that this then results in these psychological differences.”
Although reported as if it were some sort of new discovery, this concept is hardly new. In fact, this hypothesis has been around for quite a long time—since at least the 1980s. Francis Fukuyama’s book, “The Origins of Political Order,” even has a chapter entitled, “Christianity Undermines the Family,” where he expounds this hypothesis in detail. As another example, the most popular post on the notorious hbd chick’s blog is entitled, whatever happened to european tribes? (hbd chick does not use capital letters), and dates from 2011. She quotes a paper from Avner Greif (whom we met last time): “Family structure, institutions, and growth – the origin and implications of Western corporatism”.
“The medieval church instituted marriage laws and practices that undermined large kinship groups. From as early as the fourth century, it discouraged practices that enlarged the family, such as adoption, polygamy, concubinage, divorce, and remarriage. It severely prohibited marriages among individuals of the same blood (consanguineous marriages), which had constituted a means to create and maintain kinship groups throughout history. The church also curtailed parents’ abilities to retain kinship ties through arranged marriages by prohibiting unions in which the bride didn’t explicitly agree to the union.
“European family structures did not evolve monotonically toward the nuclear family nor was their evolution geographically and socially uniform. However, by the late medieval period the nuclear family was dominant. Even among the Germanic tribes, by the eighth century the term family denoted one’s immediate family, and shortly afterwards tribes were no longer institutionally relevant. Thirteenth-century English court rolls reflect that even cousins were as likely to be in the presence of non-kin as with each other.
Hbd chick speculates as to why this might be the case (again, no caps for her):
the leaders of the church probably instituted these reproductive reforms for their own gain — get rid of extended families and you reduce the number of family members likely to demand a share of someone’s legacy. in other words, the church might get the loot before some distant kin that the dead guy never met does. (same with not allowing widows to remarry. if a widow remarries, her new husband would inherit whatever wealth she had. h*ck, she might even have some kids with her new husband! but, leave her a widow and, if she has no children, it’s more likely she’ll leave more of her wealth to the church.)
but, inadvertently, they also seem to have laid the groundwork for the civilized western world. by banning cousin marriage, tribes disappeared. extended familial ties disappeared. all of the genetic bonds in european society were loosened. society became more “corporate” (which is greif’s main point).
Now, for us Westerners, the idea of marrying your cousin is kind of gross (which might be an additional confirmation of the thesis). In the United States, jokes about cousin marriage and inbreeding are commonly aimed at people living in Appalachia. The movie Deliverance cemented this in the popular consciousness.
But if you know anything about anthropology, you know that cousin marriage isn’t all that uncommon around the world; in fact in some societies it’s considered the most desirable match! Societies use kinship terms to distinguish between parallel and cross-cousins. In most societies, cross-cousin marriage is okay (maybe even preferred), but parallel cousin marriage is a no-no. That’s why the term for “sibling” in many languages often encompasses parallel cousins. That is, marrying your parallel cousin is the same as marrying your brother or sister, i.e. it’s incest. What the Church did, then, was greatly expand the definition of incest:
In many societies, differentiated cousin-terms are prescriptive of the people one can/should or is forbidden to marry. For example, in the Iroquois kinship terminology, parallel cousins (e.g. father’s brother’s daughter) are likewise called brother and sister–an indication of an incest taboo against parallel cousin-marriage. Cross-cousins (e.g. father’s sister’s daughter) are termed differently and are often preferred marriage partners.
And, of course, the choice of marriage partners in a hyper-localized world with basically nothing in terms of mass communication and very little in the way of long-distance transportation would have been much more restricted than we are used to. The simple invention of the bicycle in the 1800s widened the pool of potential marriage partners considerably:
The likelihood of finding a suitable marriage partner depends not only on the degree to which one becomes acquainted with the possible marriage partners in a region but also on the changing boundaries of what constitutes a region. A great many studies, on all parts of the globe, have demonstrated that most people tend to marry someone living close by. On foot in accessible terrain – that is, no mud, rivers, mountains, and gorges – one can perhaps walk 20 kilometers [12.4 miles] to another village and walk the same distance back on the same day.
This distance comes close to the limit of trust that separated the known universe from the “unsafe” world beyond. If marriage “horizons” expanded, young suitors would be able to meet more potential marriage partners. The increase in the means and speed of transportation brought about by new and improved roads and canals, and by new means of transport such as the train, the bicycle, the tram, and the motorcar brought a wider range of potential spouses within reach. These new means of transport increased the distance one could travel during the same day, and thus expanded the geographical marriage horizon. 
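To put some rough numbers on that: if potential partners are spread more or less evenly across the countryside, the size of the marriage pool grows with the square of the distance one can cover in a day. Here is a back-of-the-envelope sketch in Python; the population density and the bicycle’s day-trip range are my own illustrative assumptions, not figures from the quoted study.

```python
import math

def partner_pool(radius_km: float, density_per_km2: float = 30.0) -> float:
    """Rough count of people living within a day-trip radius,
    assuming uniform population density (an assumption)."""
    return math.pi * radius_km ** 2 * density_per_km2

# On foot: ~20 km reachable in a day, per the quoted passage.
# By bicycle: perhaps three times that distance -- an illustrative guess.
foot = partner_pool(20)
bike = partner_pool(60)

print(f"on foot: ~{foot:,.0f} people within range")
print(f"by bicycle: ~{bike:,.0f} people within range")
print(f"the pool grows by a factor of {bike / foot:.0f}")  # (60/20)**2 = 9
```

The point is the quadratic scaling: tripling the daily travel radius multiplies the candidate pool roughly ninefold, which is why something as humble as the bicycle could visibly widen marriage horizons.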
Arranged marriages between kin are designed to keep land and wealth in the same extended kinship lines, rather than breaking them up or turning them over to other families. In societies where lineages are ranked, losing such land and property means a downgrade in social status. That’s why you get extreme versions like sibling marriage in ancient Egypt (with the associated birth defects). Even in fairly modern times, European royalty had a very small pool of suitable marriage partners to choose from (Prince Philip is, in fact, a distant cousin of Queen Elizabeth—no jokes about Prince Charles, please).
Although in the modern, developed world, cousin marriage is fairly rare, it’s somewhat more common in societies which are often labelled “traditional”. It does occur among some communities even in the West, however: Did my children die because I married my cousin? (BBC). And I’ve always found a great irony in the fact that Darwin himself married his first cousin.
So, for anthropologists, the prohibition against cousin marriage is a big deal.
WEMP and HL
Anthropologists and historians also discern a different and distinct marriage pattern in medieval Western Europe from much of the rest of the contemporary world; distinct enough to merit the uninspired name of the Western European Marriage Pattern (WEMP). Its distinctive features are:
– Strict monogamy, i.e. no polygyny. We think of this as normal, but in terms of sheer numbers, most cultures have been polygynous (one man being able to marry multiple wives). Monogamy was the norm for Indo-European cultures even before Christianity (e.g., ancient Greece, Rome, India).
– Relatively late age of first marriage. Many cultures married off women at puberty or shortly thereafter – anywhere from 13 to 16 years old. This was seen as necessary in an era of high infant and maternal mortality. But in Europe, both men and women married much later—often in their late twenties, or even older for men. Also, the difference in ages between men and women was slight—typically only a few years. Yet in many parts of the world even today, very young women will be married off to prestigious men who are old enough to be their grandfather! Some people, of course, still lament this change, specifically Judge Roy Moore and everyone involved with Jeffrey Epstein.
– Divorce was difficult to obtain. Marriage was seen as a lifetime commitment, and divorce was accordingly hard to get – just ask Henry the Eighth. Of course, given higher mortality rates – especially in childbirth – in practice this meant “till death do us part” was less of a commitment back then. Today we practice serial monogamy – one partner at a time, but less of a lifetime commitment.
– Marriage was voluntary on the part of the woman. No forced marriages here (unless it was to secure some kind of political alliance).
– Fewer children. Rather than just pump out a litter, European couples had fewer children, yet the population still grew overall. No one is quite sure why, but the relatively high status of women may have had something to do with it. Of course, it’s harder to have a large number of children with just one wife, although some people like J.S. Bach managed to do it. As Wikipedia summarizes, “women married as adults rather than as dependents, often worked before marriage and brought some skills into the marriage, were less likely to be exhausted by constant pregnancy, and were about the same age as their husbands.”
– Neolocal households and “nuclear” families. Leaving your parents’ household and establishing your own separate household is, again, fairly standard for us Westerners, but in many places it is atypical. Married couples often live with their extended families in much of the rest of the world: Africa, Asia, Oceania, etc. Even in eastern Europe it was fairly common for couples to live in an extended family household under the control of a patriarch (leading to all sorts of drama). Speaking of Eastern Europe:
The reason it’s called the *Western* European Marriage Pattern is because there is an imaginary line dividing it from the rest of the continent. The divergence in marriage patterns and inheritance practices was discovered by a demographer called John Hajnal, and hence it is called the Hajnal Line (HL). It runs roughly from Trieste to St. Petersburg. Some areas of Western Europe, such as Ireland and parts of southern Europe, are also “outside” the Hajnal line.
To the west of the Hajnal line, about half of all women aged 15 to 50 were married at any given time, while the other half were widows or spinsters; to the east of the line, about seventy percent of women in that age bracket were married at any given time, while the other thirty percent were widows or nuns.
The marriage records of Western and Eastern Europe in the early 20th century illustrate this pattern vividly; west of the Hajnal line, only 25% of women aged 20–24 were married while to the east of the line, over 75% of women in this age group were married and less than five percent of women remained unmarried. Outside of Europe, women could be married even earlier and even fewer would remain celibate; in Korea, practically every woman 50 years of age had been married and spinsters were extremely rare, compared to 10–25% of women in western Europe age 50 who had never married.
The idea is that the difference was brought about by the actions of the Catholic Church. More exposure to the Church meant weaker families and weaker kinship ties; less exposure meant that the “default” extended family system was maintained.
Furthermore, there are some ideas that follow from that:
– Western Europeans have weaker family ties.
– Western Europeans have a greater sense of individualism and independent thinking, and a correspondingly higher tolerance for deviants and misfits than other cultures.
– Both of these traits were crucial for the development of capitalism.
The idea is that, since extended kinship groups and tribes disappeared, inclusive institutions were formed, out of necessity, in Europe rather than elsewhere. These inclusive institutions, as we saw last time, were critical for the development of general rules and Liberalism. Those developments, in turn, allowed the disruptive institutions of capitalism, as described by Marx, to rework social relations: “all that is solid melts into air.” These developments led Western Europe to subsequently dominate the modern world. For example, this paper from 2017 by one of the new paper’s co-authors advances the hypothesis that institutional developments gave Western Europe the edge:
Why did Europe pull ahead of the rest of the world? In the year 1000 AD many regions like China or the Middle Easter [sic] were more advanced than Europe. This paper contributes to this debate by testing the hypothesis that the Churches’ [sic] medival [sic] marriage regulations constituted an important precondition for Europe’s exceptional economic development by fostering inclusive institutions. In the medieval period, Churches instituted marriage regulations (most prominently banning kin-marriages) that destroyed extended kin-networks. This allowed the formation of a civic society and inclusive institutions. Consistent with the idea that those marriage regulations were an important precondition for Europe’s institutional development, I present evidence that Western Church exposure already fostered the formation of city level inclusive institutions before 1500 AD
An important building block of the argument is that extended kin networks are detrimental to the formation of a civic society and inclusive institutions. The European kin-structure is unique in the world with the nuclear family dominating and kin marriages are almost absent. In parts of the world, first and second cousin-marriages account for more than 50 percent of all marriages. Kin-marriages lead to social closure and create much tighter family networks compared to less fractionalized societies where the nuclear family dominates for biological, sociological, and economic reasons: kin-selection predicts that the implied higher genetic relatedness increases altruistic behavior towards kin, kin-marriages decrease interaction with and therefore trust in outsiders, and they change economic incentives: supporting one’s nieces and nephews simultaneously benefits the prospective spouses of one’s own children. More importantly, though, in the absence of a supra-level inclusive institutions [sic], the family provides protection and insurance creating a stable equilibrium where individual deviation from loyalty demands is costly. Excessive reliance on the family, nepotism, and other contingencies of strong extended kin-groups in turn impede social cohesion and the formation of states with inclusive institutions.
In line with Acemoglu, Johnson, Robinson and Yared’s notion of critical junctures this paper provides evidence that the Churches’ marriage regulations changed Europe’s social structure by pushing it away from a kin-based society, and paved the way for Europe’s special developmental path. The Churches’ marriage regulations – most prominently the banning of consanguineous marriages (“marriages of the same blood”) – were starting to be imposed in the early medieval ages. Backed by secular rulers, this ban was accompanied by severe punishment of transgressions and was very comprehensive – the Western Church at times prohibited marriages up to the seventh degree of relatedness (that is, marriage between two people sharing one of their 128 great-great-great-great-great-grandparents). Clearly it was impossible to trace and enforce the ban to this degree, yet it demonstrates its severity. The eastern Church also banned cousin-marriage but never to the same extent (providing variation within Christian countries). 
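As a quick check on the arithmetic in that last passage: the number of a person’s ancestors doubles with each generation back, so a ban to the seventh degree of relatedness reaches the generation containing 2^7 = 128 ancestors. A trivial sketch:

```python
# A person's ancestors double every generation back: 2**n at generation n.
for n in range(1, 8):
    print(f"generation {n}: {2 ** n} ancestors")

# Generation 7 (great-great-great-great-great-grandparents) gives
# 2**7 = 128, matching the figure in the quoted passage.
```

Since nearly everyone in a small, settled community would share at least one of those 128 ancestors, a ban of that scope was unenforceable in practice, which is exactly the point the quoted passage makes about its severity.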
I remember reading an anecdote from Jared Diamond a while ago, and I can’t remember whether it was in an interview or in one of his books (I wish I could find it). He was describing how someone in the village in Papua New Guinea where he was staying wanted to open an ice-cream shop to bring the glory of ice cream to the rest of the village. But this fellow ran into a small problem. In small villages in tribal New Guinea, everyone is basically related to everyone else in some way, however remote. When the budding entrepreneur tried to charge his cousins for an ice cream cone, they reacted with indignation. Charging your relatives for something was considered a severe faux pas! The village was still primarily a reciprocal gift economy. They simply could not get their heads around the concept that they had to pay for stuff. In the end, he could either make a profit or alienate everyone else in the village whom he depended upon. The ice cream shop folded.
Why did the church do this? The authors speculate that it may have been less about scripture, and more self-serving:
…the church’s focus on marriage proscriptions rose to the level of obsession. “They came to the view that marrying and having sex with these relatives, even if they were cousins, was something like sibling incest in that it made God angry,” he says. “And things like plagues were explained as a consequence of God’s dissent.”
The taboo against cousin marriage might have helped the church grow, adds Jonathan Schulz, an assistant professor of economics at George Mason University and first author of the paper. “For example,” he says, “it is easier to convert people once you get rid of ancestral gods. And the way to get rid of ancestral gods is to get rid of their foundation: family organization along lineages and the tracing of ancestral descent.”
While the Hajnal line was discovered back in 1965, it was unknown why marriage was so different west of the line than east of it until a 1983 book by Jack Goody called “The Development of the Family and Marriage in Europe.” Goody was an anthropologist who specialized in marriage customs and inheritance patterns around the world—things like dowries, bridewealth, primogeniture, partible inheritance, etc. From his study of Medieval Europe, following Hajnal’s discoveries, he was the first to put forward the idea that the Catholic Church’s prohibitions were the critical factor in the demise of the tribal structures and the subsequent rise of Western individualism. This is from Fukuyama’s Origins of Political Order:
Goody notes that the distinctive Western European marriage pattern began to branch off from the dominant Mediterranean pattern by the end of the Roman Empire. The Mediterranean pattern, which included the Roman gens, was strongly agnatic or patrilineal, leading to the segmentary organization of society. The agnatic group tended to be endogamous, with some preference for cross-cousin marriage. There was a strict separation of the sexes and little opportunity for women to own property or participate in the public sphere. The Western European pattern was different in all these respects: inheritance was bilateral; cross-cousin marriage was banned and exogamy promoted; and women had greater rights to property and participation in public events.
The shift was driven by the Catholic church, which took a strong stand against four practices: marriages between close kin, marriages to the widows of dead relatives (the so-called levirate), the adoption of children, and divorce. The Venerable Bede, reporting on the efforts of Pope Gregory I to convert the pagan Anglo-Saxons to Christianity in the sixth century, notes how Gregory explicitly condemned the tribe’s practices of marriage to close relatives and the levirate. Later church edicts forbade concubinage, and promoted an indissoluble, monogamous lifetime marriage bond between men and women.
…The reason that the church took this stand, in Goody’s view, had much more to do with the material interests of the church than with theology. Cross-cousin marriage (or any other form of marriage between close relatives), the levirate, concubinage, adoption, and divorce are what he labels “strategies of kinship” whereby kinship groups are able to keep property under the group’s control as it is passed down from one generation to another….the church systematically cut off all the available avenues for passing down property to descendants. At the same time, it strongly promoted voluntary donations of land and property to itself. The church stood to benefit materially from an increasing pool of property-owning Christians who died without heirs.
The relatively high status of women in Western Europe was an accidental by-product of the church’s self-interest. The church made it difficult for a widow to remarry within the family group and thereby reconvey her property back to the tribe, so she had to own the property herself. A woman’s right to own property and dispose of it as she wished stood to benefit the church, since it provided a large source of donations from childless widows and spinsters. And the woman’s right to own property spelled the death knell of agnatic lineages, by undermining the principle of unilateral descent.
The Catholic church did very well financially in the centuries following these changes in the rules…By the end of the seventh century, one-third of the productive land in France was in ecclesiastical hands; between the eighth and ninth centuries, church holdings in northern France, the German lands, and Italy doubled….The church thus found itself a large property owner, running large manors and overseeing the economic production of serfs throughout Europe. This helped the church in its mission of feeding the hungry and caring for the sick, and it also made possible a vast expansion of the priesthood, monasteries, and convents. But it also necessitated the evolution of an internal managerial hierarchy and set of rules within the church itself that made it an independent political player in medieval politics.
Despite all this, it remained just a speculative, unproven hypothesis. What Henrich et al.’s paper does is amalgamate a large amount of interdisciplinary data to try to back up the hypothesis. Their idea is that such prohibitions would have altered the cultural behavior of those societies relative to the ones around them, and that this cultural behavior can be detected through things like church records, the use of intermediary financial instruments, the frequency of blood donations, and even unpaid parking fines. By establishing a correlation between Church exposure and these sorts of socio-cultural behaviors, they argue, we can see the roots of the cultural differences between the rest of the world and what they term WEIRD cultures: Western, Educated, Industrialized, Rich, and Democratic.
In the course of their research, Henrich and his colleagues created a database and calculated “the duration of exposure” to the Western church for every country in the world, as well as 440 “subnational European regions.” They then tested their predictions about the influence of the church at three levels: globally, at the national scale; regionally, within European countries; and among the adult children of immigrants in Europe from countries with varying degrees of exposure to the church.
In their comparison of kin-based and church-influenced populations, Henrich and his colleagues identified significant differences in everything from the frequency of blood donations to the use of checks (instead of cash) and the results of classic psychology tests—such as the passenger’s dilemma scenario, which elicits attitudes about telling a lie to help a friend. They even looked at the number of unpaid parking tickets accumulated by delegates to the United Nations…In their analysis of those tickets, the researchers found that over the course of one year, diplomats from countries with higher levels of “kinship intensity”—the prevalence of clans and very tight families in a society—had many more unpaid parking tickets than those from countries without such history.
The West itself is not uniform in kinship intensity. Working with cousin-marriage data from 92 provinces in Italy (derived from church records of requests for dispensations to allow the marriages), the researchers write, they found that “Italians from provinces with higher rates of cousin marriage take more loans from family and friends (instead of from banks), use fewer checks (preferring cash), and keep more of their wealth in cash instead of in banks, stocks, or other financial assets.” They were also observed to make fewer voluntary, unpaid blood donations.
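The headline results here are correlational: a measure of Church exposure or kinship intensity is set against a behavioral outcome, country by country or province by province. As a minimal sketch of that kind of check, here is the bare statistical skeleton in Python, with entirely made-up numbers; the actual paper works with far more data and a battery of controls for wealth, governance, and other confounds.

```python
import numpy as np
from scipy import stats

# Hypothetical, made-up data: one entry per country.
# kinship_intensity: index of cousin marriage / clan structure (0..1)
# unpaid_tickets: unpaid parking tickets per UN delegate per year
kinship_intensity = np.array([0.10, 0.20, 0.30, 0.50, 0.60, 0.80, 0.90])
unpaid_tickets    = np.array([0.40, 1.10, 0.90, 2.30, 3.00, 4.20, 5.10])

# A simple Pearson correlation is the crudest version of the test.
r, p = stats.pearsonr(kinship_intensity, unpaid_tickets)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

In the paper’s real data, higher kinship intensity lines up with more unpaid tickets, fewer voluntary blood donations, and more reliance on cash and family loans; a correlation of this general shape is the load-bearing quantity in each case.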
This builds on Henrich’s previous finding that WEIRD cultures score differently on certain psychological tests than people in the rest of the world. That paper was a widely-cited bombshell. For years, psychology studies confined themselves to Western subjects, particularly the undergraduate students at the universities where the studies were carried out. It was simply assumed that people thought pretty much the same way everywhere, and therefore Western college students could safely be used as a stand-in for humans more generally.
Henrich, an anthropologist, administered those same tests to members of diverse tribal societies around the world, which, remarkably, hadn’t been done before. The results indicated that using Westerners—particularly rich, well-educated ones—as stand-ins for the entire human race in psychological tests was fundamentally flawed. We are, in fact, outliers when it comes to human behavior. This has profound implications for economics and sociology.
If Westerners really are different, then why is that? This paper attempts to answer the question.
Kinship vs. Capitalism
Both Max Weber and Karl Marx realized that the destruction of large corporate kinship groups and the separation between the household and the market economy were the prerequisites for later capitalist production. Both traced this change to sometime between the sixteenth and the nineteenth centuries. Weber focused on the culture of Protestantism as the cause, while Marx focused on the changing methods of economic production during the time period, such as the Enclosure movement and the subsequent explosion of rootless wage laborers. Weber’s ideas were later expanded by the sociologist Talcott Parsons. Karl Polanyi likewise dated the shift from householding economies and cottage industries to Market Society to this same time frame.
However, a very influential book called “The Origins of English Individualism,” by Alan Macfarlane argued that England was basically an individualistic culture by 1250—long before its continental neighbors.
By shifting the origins of capitalism well before the Black Death, we alter the nature of a number of other problems. One of these is the origin of modern individualism. Those who have written on the subject have always accepted the Marx-Weber chronology. For example, David Riesman assumed that modern individualism emerged out of an older collectivist, “traditional-directed” society, in the fifteenth and sixteenth centuries. Its growth was directly related to the Reformation, Renaissance and the break-up of the old feudal world. The “inner-directed” stage of intense individualism occurred in the period between the sixteenth and nineteenth centuries. Though a recent general survey of historical and philosophical writing on individualism concedes that some of the roots lie deep in classical and biblical times and also in medieval mysticism, still, in general, it stresses the Renaissance, Reformation and the Enlightenment as the period of great transition. Many of the strands of political, religious, ethical, economic and other types of individualism are traced back to Hobbes, Luther, Calvin and other post-1500 writers.
Yet, if the present thesis is correct, individualism in economic and social life is much older than this in England. In fact within the recorded period covered by our documents, it is not possible to find a time when an Englishman did not stand alone. Symbolized and shaped by his ego-centered kinship system, he stood in the center of the world. This means that it is no longer possible to “explain” the origins of English individualism in terms of either Protestantism, population change, the development of a market economy at the end of the middle ages, or other factors suggested by the writers cited. Individualism, however defined, predates sixteenth-century changes and can be said to shape them all. The explanation must lie elsewhere, but will remain obscure until we trace the origins back even further than attempted in this work. 
Macfarlane claims that the evidence indicates that already by the thirteenth century, England was no longer what he terms a “peasant society,” or what we’ve been referring to as a “traditional society.” Even way back then, he says, England had more characteristics in common with later capitalist societies than with more traditional ones: freeholding of land, wage labor, free choice of marriage partners, individual inheritance, alienable property, geographical and social mobility, and so forth. From a review of the book:
The bulk of this short book is taken up by attempting to demonstrate that the characteristics of peasant society did not apply to England from the thirteenth century onward…In peasant societies land is not individualized but is held by the entire family through time and is seldom sold, since it is greatly revered; in England from the twelfth century onward, land was held by individuals (both men and women) and was often sold to nonfamily members, especially since geographical mobility of families was high and since children were sometimes disinherited.
In peasant societies the unit of ownership (the joint family) is also the unit of production and consumption; in England at the time the nuclear family (rather than the stem or joint family) was predominant, and the children often worked as servants for other families, rather than for their own families.
In peasant society, the families are economically almost self-sufficient, production for the market is small, and cash is scarce; in England at that time, the economy was highly monetized, agricultural production for the market was important, and the existence of elaborate books of accounts of farms attest to their “rational” attitudes toward money making (there was even money-lending for interest in rural areas).
In peasant societies there is a certain income and social equality between families that work on the land and a large gap in income and social status stands between them and other social groups, so that little mobility occurs between classes; in England at that time, considerable differentiation of wealth among the rural workers could be found and, in addition, some mobility between classes occurred.
Finally, in peasant societies women have a low age of marriage, their marriage partners are selected for them, and few remain unmarried; in England at that time, women apparently had a moderate age of marriage, selected their own partners, and, in many cases, did not marry at all. 
What this means is that the sociologists and economic historians who use England as the exemplar of a transition from a feudal, peasant society to a capitalist one are looking in the wrong time period! The transition took place long before the era they were examining, as Macfarlane explains:
…if we are correct in arguing that the English now have roughly the same family system as they had in about 1250, the arguments concerning kinship and marriage as a reflection of economic change become weaker. To have survived the Black Death, the Reformation, the Civil War, the move to the factories and the cities, the system must have been fairly durable and flexible. Indeed, it could be argued that it was its extreme individualism, the simplest form of molecular structure, which enabled it to survive and allowed society to change. Furthermore, if the family system pre-existed, rather than followed on industrialization, the causal link may have to be reversed, with industrialization as a consequence, rather than a cause, of the basic nature of the family. 
Macfarlane’s book did not answer the question as to why the English were so different from the rest of the continent (for additional criticism, see this [PDF]). However, beginning with Goody’s book, attention became focused on the efforts of the Catholic Church to break up kin groups in Anglo-Saxon England. This may have been where the practice began, as Henrich noted in a 2016 interview with Tyler Cowen:
When the church first began to spread its marriage-and-family program where it would dissolve all these complex kinship groups, it altered marriage. So it ended polygyny, it ended cousin marriage, which stopped the kind of . . . forced people to marry further away, which would build contacts between larger groups. That actually starts in 600 in Kent, Anglo-Saxon Kent.
Missionaries then spread out into Holland and northern France and places like that. At least in terms of timing, the marriage-and-family program gets its start in southern England.
This might explain why Anglo-Saxon culture is so manifestly different from other cultures, with its emphasis on individualism, hustling, shallow social ties, and “making your own way.” This was further cemented by the fact that England was conquered by a foreign people in 1066—the Normans—who inserted themselves as a new ruling stratum above the local lords in the prevailing feudal system. As one of my readers pointed out, the Normans had contempt for those beneath them, so much so that they didn’t even bother to learn the local language of those they ruled over. The “Norman yoke” might be another ingredient in the origins of English attitudes toward individualism. As Brad DeLong put it, “The society of England becomes more unequal because William the Bastard from Normandy and his thugs with spears—300 families, plus their retainers—kill King Harold Godwinson, and declare that everyone in England owes him and his retainers 1/3 of their crop.” And besides, with such a hodgepodge of peoples—Normans entering an already multicultural society of Angles, Saxons, Jutes, Danes, various Celts, and so forth—it’s hard to see how a tribal society could have persisted on one small island in any event, absent strict prohibitions against intermarriage (for example, Japan has a similar lack of tribes, except for minorities like the Ainu people).
The feudal system, with its emphasis on contractual obligations, was itself a substitute for the tribal solidarity that by that time had already been eroded. Henry Maine argued that feudalism was an amalgamation of earlier tribal customs with imported Roman legal systems of voluntary contract:
Feudalism…was a compound of archaic barbarian usage with Roman law…A Fief was an organically complete brotherhood of associates whose proprietary and personal rights were inextricably blended together. It had much in common with an Indian Village Community and much in common with a Highland clan. But…the earliest feudal communities were neither bound together by mere sentiment nor recruited by a fiction. The tie which united them was Contract, and they obtained new associates by contracting with them...The lord had many of the characteristics of a patriarchal chieftain, but his prerogative was limited by a variety of settled customs traceable to the express conditions which had been agreed upon when the infeudation took place.
Hence flow the chief differences which forbid us to class the feudal societies with true archaic communities. They were much more durable and much more various…more durable, because express rules are less destructible than instinctive habits, and more various, because the contracts on which they were founded were adjusted to the minutest circumstances and wishes of the persons who surrendered or granted away their lands.
The medieval historian Marc Bloch also noted that feudalism was a substitute for earlier social ties which had been abandoned:
Yet to the individual, threatened by the numerous dangers bred by an atmosphere of violence, the kinship group did not seem to offer adequate protections, even in the first feudal age. In the form in which it then existed, it was too vague and variable in its outlines, too deeply undermined by the duality of descent by male and female lines. That is why men were obliged to seek or accept other ties. On this point history is decisive, for the only regions in which powerful agnatic groups survived–German lands on the shores of the North Sea, Celtic districts of the British Isles–knew nothing of vassalage, the fief and the manor. The tie of kinship was one of the essential elements of feudal society; its relative weakness explains why there was feudalism at all. 
I should note that medieval guilds were also a response to this need for security; some historians of guilds trace their ancestry back to frith gilds, which were brotherhoods explicitly established for protection and defense.
And so, a society governed by explicit contracts and legal institutions centered around individuals became the norm in Western Europe far before the rest of the world. In the patchwork quilt of post-Roman Europe, some areas escaped infeudation altogether and retained elements of older, more traditional social orders. It was these remote communities that were studied in the late nineteenth century in order to uncover the lost world of Europe’s past tribal organization (for example, in Laveleye’s Primitive Property). In other parts of Europe, feudal contracts took a myriad of alternative forms, as Maine noted above—so much so that medieval historians today dislike even using the term feudalism to describe the political arrangements of this time period, because the contracts themselves were so varied. They often note that what we call feudalism was hardly one monolithic system. But it does seem as though the specific arrangements of feudalism from country to country determined the subsequent and divergent paths that various Western European countries would take. In a paper entitled “English Feudalism and the Origins of Capitalism,” political scientist George Comninel argues that the specifics of English feudalism allowed capitalism to develop there, rather than in neighboring France:
The specific historical basis for the development of capitalism in England – and not in France – is ultimately to be traced to the unique structure of English manorial lordship. It is the absence from English lordship of the seigneurie banale – the political form of parcellised sovereignty which was central to the development of Continental feudalism – that can be seen to account for the peculiarly ‘economic’ turn taken in the development of English class relations of surplus extraction. The juridical and economic social relations necessary for capitalism were forged in the crucible of a peculiarly English form of feudal class society.
In France, by contrast, the distinctly political tenor of social development – visible in the rise of the absolutist state, in the intensely political character of the social conflict of the Revolution, and as late as the massively bureaucratic Bonapartist state of the Second Empire – can be traced just as specifically to the centrality of seigneurie banale in the fundamental relations of feudalism.
The effects flowing from this initial basic difference in feudal relations include: the unique differentiation of freehold and customary tenures among English peasants, in contrast to the survival of allodial land alongside censive tenures of France; the unique development of English common law, rooted in the land, in contrast to the Continental revival of Roman law, based on trade; the unique commoner status of English manorial lords, in contrast to the Continental nobility; and, most dramatically, in the unique enclosure movement by which England ceased to be a peasant society – ceased even to have peasants – before the advent of industrial capitalism, in stark contrast with other European societies. 
I’ve banged on for too long already, so I’m just going to close with a few notes.
Unfortunately, many of the ideas I’ve written about above have been largely discussed in the context of white supremacy and racialism, and this research will give succor to those who believe that the “white race” is unique and therefore superior to all other people on earth.
I don’t think that’s the intent of the paper at all, although I am a little disturbed by the associations with George Mason University—the epicenter of the Koch Brothers’ takeover of a wide swath of economics. However, I’ll give them the benefit of the doubt for now.
While the racialist and HBD movements online are determined to reduce everything to genes (in a perverse inverse of blank slatism), it seems to me that these are cultural developments more than anything else, and are worth studying.
The desire to have such cultural differences rooted in biology is mainly an attempt by the Reactionary Right to justify the course of history and reify the status quo. For example: why is Africa poor? It’s not because they have been—and continue to be—exploited by Western colonial powers, it’s because they are stupid. The flip side to that is, of course, that Europeans are naturally smarter and more pro-social, and this is baked into the genes, meaning that reform is unnecessary and impossible—it’s just “the way things are.” The Just World philosophy on the level of nations. It also rationalizes why immigration—no matter how limited—is bad, without admitting to pure racism. Rather, it’s “just science,” claim the HBD crowd, that Europeans are different and superior at the genetic level, and therefore must remain “pure and undiluted” in order to maintain Western civilization.
But I doubt that there is any genetic basis here. Yes, institutional and cultural beliefs are very persistent, and these are indeed barriers to “Westernizing” the rest of the world. But to put all of this down to genes without evidence—where is the gene for “clannishness”?—is not scientific, it’s political: exactly what they accuse the “radical Left” of engaging in.
Finally, I’ll just note that the places where kinship groups were broken up the earliest seem to have the highest rates of depression, suicide, and mental illness to this day, while those parts of the world that retained embedded human relationships—although significantly poorer—seem to be far happier and more content with life. It forces one to contemplate what the ultimate purpose of “progress” really is.
 Alan Macfarlane, The Origins of English Individualism: Some Surprises, pp. 270-271
 Quoted in Fukuyama, p. 236
 For example, Primitive Property, Chapter XV, p. 212:
Emile Souvestre, in his work on Finisterre, mentions the existence of agrarian communities in Brittany. He says it is not uncommon to find farms there, cultivated by several families associated together. He states that they live peacefully and prosperously, though there is no written agreement to define the shares and rights of associates. According to the account of the Abbé Delalandre, in the small islands of Hœdic and Houat, situated not far from Belle Isle, the inhabitants live in community. The soil is not divided into separate properties. All labour for the general interest, and live on the fruits of their collective industry. The curé is the head of the community; but in the case of important resolutions, he is assisted by a council composed of the twelve most respected of the older inhabitants. This system, if correctly described, presents one of the most archaic forms of primitive community.
I want to talk about this article that I found a while back on Cato Unbound called The Trouble in Getting to Denmark. Denmark is the example given by Francis Fukuyama as the ideal modern, peaceful Western Liberal democratic state. Inconveniently for the Cato Institute, it also has one of the most generous social safety nets in the world.
[Tangentially: Cato is all about promoting economic “freedom,” and Denmark is one of the freest and most entrepreneurial societies in the world. But it’s that way precisely because of its strong safety net and social democratic policies—policies that are being promoted by people like Bernie Sanders in the U.S. Also, see this: Never Trust the Cato Institute (Current Affairs)]
This post centers on a new history book by Mark Koyama and Noel Johnson called Persecution and Toleration: The Long Road to Religious Freedom. The authors are both professors at George Mason University and are affiliated with the Mercatus Center, which at first blush might make them a little suspect. But there are some very good historical insights here, which are well worth a look. I’ll also quote extensively from this interview with Koyama by Patrick Wyman on the Tides of History podcast, which covers the subject matter well. I’ve lightly altered some of the dialogue for clarity; quotes are from Koyama unless noted otherwise.
The book’s insights dovetail with what we’ve been talking about recently: the rise of the modern state, first absolutist and then Liberal. The thesis is that religious freedoms were basically the foundation for the rise of capital-L Liberalism—Liberalism being the idea of society as an assorted collection of solitary, self-directed individuals who must be free from any sort of predetermined social identity. Because this notion has become the hegemonic assumption of the modern world, we fail to recognize just how novel it really is. So let’s dive in…
The main thesis is succinctly stated by Patrick Wyman near the beginning of the podcast:
“The rise of modern states, which were capable of enforcing general rules throughout their territory–down to the local level–was the precondition for religious peace and the eventual rise of religious and other freedoms, which we can term more broadly Liberal freedoms.”
Medieval European society gets the closest look, because it is out of these societies that the modern Liberal state develops, but many of the concepts and insights are applicable to other societies as well.
Religious Freedom versus Religious Tolerance
The book makes a very important point: religious freedom and religious tolerance are not the same thing; they are actually quite different. Most modern nation-states have true religious freedom, and most are founded on a secular basis (to the consternation of religious fundamentalists). Ancient states, however, practiced a form of religious tolerance, which was the toleration of minority religious beliefs, the same way you might tolerate your neighbor’s loud music instead of going over and starting a fight, or tolerate a screaming baby on a flight:
[3:18] Mark Koyama: “We attempt to project backwards our modern notions of what religious freedom is. In our modern language, we often use toleration interchangeably with religious freedom, where we describe toleration as an attitudinal thing–like ‘I’m a tolerant person; I don’t care what religion you have,’ as opposed to its original meaning, which was ‘to bear.’ This was a sufferance. We’re going to allow these Muslims, say, to practice their religion, but it’s not because we’re okay with it. It’s because it’s the best expedient or pragmatic response to religious diversity.”
[10:51] Patrick Wyman (host): “There’s a fundamental difference between religious sufferance and freedom. Between suffering something to happen because it’s necessary for you to run your state the way you want to, and actively embracing this thing as a legally-based ideal.”
I think that’s an important point. Ancient multi-ethnic states did not have true religious freedom. You will often find it asserted in various history books that they did, but this is a misunderstanding. They had religious tolerance; that is, they permitted subcommunities to openly practice their religion. It was a sufferance, but they allowed it because it was better than the alternative.
This was a categorically different concept from religious freedom as we think about it today.
One example is the Roman Empire. All the Romans really wanted was to gain the spoils of their vast empire via tax collection and tribute. They often co-opted local rulers and other notables, who subsequently became “Romanized,” but they weren’t out to transform society. To that end, subjugated ethnic groups were allowed to maintain their cultural and religious practices, with a few stipulations. For most religions, this wasn’t a problem—they were flexible enough that they could accommodate some Roman gods in their practices and be more-or-less okay with it. The Jews, on the other hand, with their strident and uncompromising monotheism, were different. They regarded their God as the real one, and all others as idols, and worshiping idols was strictly forbidden. This is why there was so much tension in Judea, tension that ultimately led to several revolts and wars.
This was a time when religious identity was not separate from cultural or ethnic identity. The rise of doctrinal evangelical religions changed all that. You can be Arab, Turkish, Persian, or Balinese and also be a Muslim. You can be Irish, Polish, French, Italian, or Nigerian and be a Catholic. That’s a much more modern-day conception of religion—as a creed freely chosen. But in ancient societies, religion was an essential and inseparable part of shared cultural identity.
In our reading of the historical evidence, neither ancient Rome nor the Islamic or Mongol Empires had religious freedom. They often refrained from actively persecuting religious minorities, but they were also ruthless in suppressing dissent when it suited their political goals. Religious freedom is a uniquely liberal achievement, and liberalism is an achievement of post-1700 modernity. What explains it?
Which raises the second major point of the book.
Identity Rules versus General Rules
For me, the biggest takeaway was the difference between identity rules and general rules.
[6:25] “An identity rule is where the content or enforcement of the law depends on the social identity of the individual involved. In contrast, a general rule is a rule where the content or enforcement of the law is independent of that individual’s relevant social identity…Identity rules could privilege a minority, or they could disadvantage it. The key here is that your social identity is determinative.”
They actually distinguish three different types of rules: personal rules, identity rules, and general rules. Personal rules are targeted at the specific person who committed the infraction, and are largely ad hoc. This works well at the local level, where everybody knows everybody else, such as in a small self-governing village, but it doesn’t scale up.
When large empires came on the scene, they imposed identity rules, where law enforcement was based largely on one’s group identity. The reason they did this is that ancient states had limited capacity to govern at a local level, i.e., low state capacity. The sophisticated legal systems we have today—with their courts, police, bailiffs, jails, attorneys, and professional judges—simply didn’t exist. The capacity simply wasn’t there. Plus, the very notion of an individual as having an identity wholly separate and unmoored from the larger group to which he or she belonged was much less common in the ancient world than in our modern one. That is, ancient societies were collectivist by default. And so rules were based on one’s ascribed group identity: one’s clan affiliation, social status, guild, corporate group, religion, etc.
With the shift to settled agriculture after 8,000 BC, political organizations became larger and states oversaw the introduction of more sophisticated legal systems to prevent theft, fraud, and uncontrolled violence. For most of history, and in much of the developing world today, these laws have taken the form of identity rules.
Identity rules depend on the social identity of the parties involved. This could refer to an individual’s clan, caste, class, religious affiliation, or ethnicity. Examples from historical legal systems abound. Aristocrats faced different rules from commoners. Slaves faced different rules from freemen. The Code of Hammurabi, for example, prescribed punishment based on the relative status of the perpetrator and the victim. Identity rules were common historically because governing individuals on the basis of their legible social characteristics was cheap. As religious identity was particularly salient, many identity rules treated individuals differently on the basis of their religion.
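To make the distinction concrete, here is a minimal Python sketch of my own (the statuses and penalty values are invented, not taken from the book) contrasting a Hammurabi-style identity rule with a general rule:

```python
# A toy contrast between an identity rule and a general rule.
# Statuses and penalty values are invented for illustration; the Code of
# Hammurabi graded punishments by relative status in just this spirit.

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    status: str  # the "legible" social identity: "noble" or "commoner"

def identity_rule_penalty(perpetrator: Person, victim: Person) -> int:
    """Penalty depends on who the parties are, not just on the act."""
    table = {
        ("noble", "noble"): 10,
        ("noble", "commoner"): 3,    # harming a social inferior costs less
        ("commoner", "noble"): 30,   # harming a superior costs far more
        ("commoner", "commoner"): 10,
    }
    return table[(perpetrator.status, victim.status)]

def general_rule_penalty(perpetrator: Person, victim: Person) -> int:
    """Penalty is the same act-based sanction regardless of identity."""
    return 10

a = Person("Ahum", "noble")
b = Person("Belshunu", "commoner")
print(identity_rule_penalty(a, b), identity_rule_penalty(b, a))  # 3 30
print(general_rule_penalty(a, b), general_rule_penalty(b, a))    # 10 10
```

The identity rule is “cheap” in exactly the sense described above: the enforcer needs to know only each party’s legible status, nothing else about them as individuals.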
This is something I’ve repeatedly tried to emphasize in my writing: when we talk about “states,” or things like “the rise of the state” in ancient history, we’re talking about something qualitatively different than when we use the term “state” today. That’s important to keep in mind.
[9:24] “The nature of pre-modern states is that, because of the way they govern, they have to rely on identity rules. They don’t have the ability or the capacity to govern at a very local level. They can’t extend their reach deep into society. So they’re more likely to say to this community: ‘we’re going to delegate to you a lot of authority; a lot of power.’ Even if they wanted to enforce a general rule, they wouldn’t be able to.”
“To take another pre-modern example, if you look at the Ottoman state throughout its history, it’s seen as an absolutist state where the Sultan has all the power. But it’s such a vast empire that, given how primitive communication technologies are, it’s inevitably decentralized, and power is delegated to local nobles. And that means that religious minorities like Christians and Jews get quite a lot of autonomy; a lot of independence, because the state just can’t govern them directly.”
“So the local religious leaders will get quite a lot of autonomy, and a lot of ‘freedom,’ precisely because the state governs through identity rules, not through general rules. This results in a lot of self-governance for religious minorities. But the key point is that this religious self-governance should not be mistaken for religious freedom. Nor should a state like the Ottoman state, which delegates power and gives autonomy to religious communities, be mistaken for a liberal state.”
The rulers of ancient states relied primarily on religion to legitimize their rule. This seems to go back very far indeed. A careful reading of, for example, The Creation of Inequality by Flannery and Marcus leads to the conclusion that all of the earliest ruling classes everywhere claimed some sort of special connection to the divine entities that were the object of collective reverence. Sometimes this was the “King as a god” model of ancient Egypt. Sometimes this was the “Ruler as steward” model, as in ancient Mesopotamia. Sometimes it was “sacral kingship,” with the ruler as high priest. Sometimes it was tribal elders or scribes who “interpreted God’s will.” Much later, it was the “Divine Right of Kings.” But religion seems to have played a role in virtually all cases that we know of.
If identity rules were a “cheap” form of enumerating and enforcing laws in low-tech, multi-ethnic societies, then appealing to religion was a “cheap” way for rulers to claim legitimacy in these types of societies. It was also crucial to the creation of coherent group identities, which were necessary for identity rules to function. Often it involved special treatment for clergymen, or some sort of power-sharing accommodation with religious officials. But that also led to fairly weak states, with little power to expand the rulers’ prerogatives.
Religion was so central to premodern societies that it is difficult to fully understand the transformations associated with modernity without attending to it. Religion was used to justify the categories that government, and society more broadly, used to structure everyday life: women versus men, nobles versus commoners, guild members versus non-guild members, Muslims versus Christians, Christians versus Jews. All of these categories—as well as the different statuses associated with them in law and in culture—relied to varying degrees on religion to legitimize their use.
Religion was an especially important component of identity in the large agrarian civilizations of Europe and the Near East in a time before nationalism and nation states. Shared religious beliefs and religious identities were seen as crucial to maintaining social order. Religious differences were extremely destabilizing because they were associated with a host of deep societal cleavages.
In an environment where a common religious identity undergirded not only the institutions of the church, but also those of the state and civil society, both religious freedom specifically, and liberalism more generally, were unthinkable.
For instance, in medieval and early modern Europe oaths sworn before God played an important role in upholding the social order. These were thought so important that atheists were seen as outside the political community, since as John Locke put it, “promises, covenants, and oaths, which are the bonds of human society, can have no hold upon an atheist.”
A shared religious identity was also crucial for guild membership. Guilds in Christian Spain excluded Muslims. Guilds in 14th century Tallinn excluded Orthodox Christians. Jews were excluded almost everywhere. In parts of Europe converts from Judaism and even their descendants or remote relations could not be guild members. In a world governed by identity rules, an individual’s religious identity determined what economic activities were open to them.
Rulers even relied on identity rules to raise revenue. In many ancient empires, for example, taxes were collected at the village level, with the collection delegated to local elders. Taxes might be assessed differently depending on the group in question: merchants might be taxed differently than farmers, and oftentimes nobles weren’t taxed at all! Different ethnic groups might face different levels of responsibility and taxation. Jews, for instance, were the only group allowed to lend money at interest in Catholic Europe, so they were frequently used as cash cows by Christian rulers:
As an illustration, consider how early modern governments often used Jewish communities as a source of tax revenue. Usury restrictions made lending by Christians very costly. However, rulers could grant monopoly rights to Jews to lend without violating their religious principles. In turn, the rates of interest charged by Jewish lenders were high, and the profits were taxed away by the very rulers who granted these rights. Finally, the specialization of Jews as moneylenders exacerbated preexisting antisemitism among the Christian population. This in turn made it relatively easy for rulers to threaten Jews if they did not pay up.
So long as rulers relied on Jewish moneylending as a source of revenue, Jews were trapped in this vulnerable situation. Their position could improve only when states developed more sophisticated systems of taxation and credit.
As suggested by the above example, low state capacity and a reliance on identity rules are self-reinforcing. States that rely on identity rules face less incentive to invest in the fiscal and legal institutions that would increase state capacity. This, in turn, makes them more reliant on identity rules and less able to enforce general rules.
Low state capacity, identity rules and religious legitimization all combined and interacted with each other to form a self-reinforcing social equilibrium, argue Koyama and Johnson.
What is a self-reinforcing equilibrium? This is a tricky one. It’s a concept developed by the Stanford economist Avner Greif, who distinguishes between “institutions as rules” and “institutions as equilibria.” The following is my interpretation, as best I can make it out:
Institutions as rules is just what it says—it looks at what the rules of the game are, and how they developed over time. Rules are prescriptive, and are set and enforced from above. They change very slowly.
Institutions as equilibria is a concept developed from game theory. In this conception, rules are an emergent phenomenon arising from consistent, repeated interactions between groups of people. There is no overall enforcer; rather, the rules develop through “playing the game” over and over again. Consequently, rules as equilibria are more likely to develop out of repeated voluntary interactions between groups rather than individuals, and are enforced by intra-group norms rather than an all-powerful “referee” overseeing everything. The rules of the game are not static; they develop as time goes on. This approach emphasizes the incentives and motivations of the groups that are interacting.
In the institutions-as-rules approach, rules are institutions and institutions are rules. Rules prescribe behavior. In the institutions-as-equilibria approach, the role of “rules”, like that of other social constructs, is to coordinate behavior. The core idea in the institutions-as-equilibria approach is that it is ultimately the behavior and the expected behavior of others rather than prescriptive rules of behavior that induce people to behave (or not to behave) in a particular way. The aggregated expected behavior of all the individuals in society, which is beyond any one individual’s control, constitutes and creates a structure that influences each individual’s behavior. A social situation is ‘institutionalized’ when this structure motivates each individual to follow a regularity of behavior in that social situation and to act in a manner contributing to the perpetuation of that structure.
An example he gives is the merchant guilds of the Middle Ages:
For example, at the medieval Champagne Fairs, large numbers of merchants from all over Europe congregated to trade. Merchants from different localities entered into contracts, including contracts for future delivery, that required enforcement over time. There was no state to enforce these contracts, and the large number of merchants as well as their geographic dispersion made an informal reputation mechanism infeasible…impersonal exchange was supported by a “community responsibility system”. Traders were not atomized individuals, but belonged to pre-existing communities with distinct identities and strong internal governance mechanisms.
Although particular traders from each community may have dealt with merchants from another community only infrequently, each community contained many merchants, so there was an ongoing trading relationship between the communities, taken as a whole. Merchants from different communities were able to trust each other, even in one-shot transactions, by leveraging the inter-community “trust” which sustained these interactions. If a member of one community cheated someone from another community, the community as a whole was punished for the transgression, and the community could then use its own internal enforcement institutions to punish the individual who had cheated.
This system was self-enforcing. Traders had an incentive to learn about the community identities of their trading partners, and to establish their own identities so that they could be trusted. The communities had an incentive to protect the rights of foreign traders, and to punish their members for cheating outsiders, so as to safeguard the valuable inter-community trade. Communities also developed formal institutions to supplement the informal reputation mechanism and coordinate expectations. For example, each community established organizations that enabled members of other communities to verify the identity of its members.
Ultimately, the growth of trade that this institution enabled created the impetus for its eventual replacement by more formal public-order (state-based) institutions which could directly punish traders by, for example, jailing them or seizing their property.
Thus, we see the importance of group identity and solidarity in establishing and enforcing social norms in a world where centralized institutions (e.g., states) are very weak. Without a powerful state, there is simply no way to enforce norms among a group of isolated, atomized individuals whose identity is completely self-chosen. But membership in various sodalities makes it possible. If you were a bad merchant who cheated or reneged on your debts, you wouldn’t be a merchant very long, even without an all-powerful state enforcing contracts from above. Your reputation, and your relationship with the group, was paramount.
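Greif’s mechanism boils down to two incentive checks, which can be sketched in a few lines of Python. This is a back-of-the-envelope toy of my own with invented payoff numbers, not Greif’s formal model: cheating must not pay once the community’s internal fine is counted, and punishing its own member must be cheaper for the community than losing inter-community trade to a boycott.

```python
# Toy model of Greif's "community responsibility system" at the Champagne
# Fairs. All payoff numbers are hypothetical illustrations.

TRADE_SURPLUS = 1.0    # each member's gain per round of honest inter-community trade
CHEAT_GAIN = 1.0       # one-off extra gain a merchant gets by cheating an outsider
COMMUNITY_SIZE = 50    # merchants who lose trade if their community is boycotted
BOYCOTT_ROUNDS = 10    # how long the victim's community refuses to deal

def cheating_pays(internal_fine: float) -> bool:
    """Does an individual merchant gain by cheating, given his guild's fine?"""
    return CHEAT_GAIN > internal_fine

def community_prefers_to_punish() -> bool:
    """Is fining one member better for the community than suffering a boycott?"""
    # The fine is an internal transfer (costless to the community in aggregate),
    # while a boycott destroys everyone's trade surplus for BOYCOTT_ROUNDS rounds.
    boycott_cost = COMMUNITY_SIZE * BOYCOTT_ROUNDS * TRADE_SURPLUS
    return boycott_cost > 0

fine = 2.0  # a fine the guild can credibly impose on a member who cheats
print("cheating pays the individual:", cheating_pays(fine))           # False
print("community prefers punishing:", community_prefers_to_punish())  # True
```

Because the fine is an internal transfer while a boycott destroys real trade surplus, the community’s threat to punish its own cheaters is credible, and that credibility is what lets strangers from different communities trust each other even in one-shot deals.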
The authors also draw a distinction between equilibria which are stable and equilibria which are self-undermining.
[11:06] PW: “You talk a lot about political legitimacy, about what allows rulers to rule without the constant threat of political violence, of coercive violence. And so you get at the concept of self-reinforcing equilibrium—that this is how medieval society functioned. In your conception, you have religious legitimacy—legitimacy given to a ruler by religious authorities—and identity rules, working together to generate a kind of political equilibrium.”
MK: “In the Middle Ages we see widespread reliance on identity rules. Why? Well, one reason is that even if a ruler was ambitious and had read Roman law and envisioned ruling on the basis of laws which were more general, less parochial, and less local, they wouldn’t have the ability to really enforce them. Ambitious medieval rulers lacked bureaucracies and standing armies, so they would be unable to overturn these rules and replace them with more general rules. So that’s one self-reinforcing relationship—the relationship between low state capacity and reliance on identity rules.”
“The other aspect is the reliance on religion as a source of legitimacy. One reason why religion is valuable is because medieval rulers didn’t provide much in the way of public goods, beyond maybe defense; but even defense is questionable because often defense is actually offense. So they’re not providing education, they’re not providing welfare—that’s done by the Church. They’re not really regulating markets. They’re not doing much to alleviate famine or harvest failures. Where does their legitimacy come from, then?”
“It’s because they’re the ‘Most Christian King,’ or the ‘Catholic Monarch,’ or the ‘Defender of the Faith.’ Religion is a cheap way for rulers to get legitimacy. But if you’re using religion to get legitimacy, you’re making a deal with the religious authorities.”
“So in the case of medieval Europe, you’re making a deal with the Church. What the deal entails might be things like: making churchmen exempt from certain laws, or exempt from paying taxes, which was common in the medieval period. It might involve allowing the papacy to choose bishops, or giving churchmen political offices.”
“If you have low state capacity, religious legitimization is going to be an appealing strategy. But at the same time, the more you rely on religion or religious authorities to legitimate your rule, the more that’s going to curtail your power, your discretionary authority to build state capacity. So it’s a self-reinforcing relationship.”
And so low state capacity, religious legitimization, and the application of identity rules were all linked together in maintaining a stable equilibrium. Eventually, though, that equilibrium was disrupted.
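To see how such a feedback loop produces a stable equilibrium, here is a toy dynamical sketch of my own (the functional forms and numbers are invented, not the authors’). State capacity and reliance on identity rules each erode the other, and the same dynamics settle into one of two self-reinforcing configurations depending on where a society starts:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def step(capacity: float, identity_reliance: float):
    # Invented dynamics: investing in capacity only pays off when identity
    # rules are receding, and identity rules are only dispensable when
    # capacity is already high.
    target_capacity = sigmoid(5 * (0.5 - identity_reliance))
    target_reliance = sigmoid(5 * (0.5 - capacity))
    return (capacity + 0.2 * (target_capacity - capacity),
            identity_reliance + 0.2 * (target_reliance - identity_reliance))

for start in [(0.2, 0.8), (0.8, 0.2)]:
    c, r = start
    for _ in range(200):
        c, r = step(c, r)
    print(f"start {start} -> capacity {c:.2f}, identity-rule reliance {r:.2f}")
    # (0.2, 0.8) settles near low capacity / heavy reliance: the medieval trap
    # (0.8, 0.2) settles near high capacity / general rules: the modern state
```

The point of the toy is only that a “medieval trap” of low capacity and heavy identity-rule reliance can be locally stable: small reforms decay back toward it, which is why it took a massive shock to dislodge it.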
Disrupting the equilibrium: The Reformation and the printing press
The Gutenberg printing press, expanding literacy, and the Protestant Reformation were all intimately connected, and provide a potent example of how technological change often drives social change, for better or for worse (a point worth attending to today).
Suddenly you have many more religious minorities, disrupting the old stable equilibrium. Perhaps even more significantly, you have religious minorities that are allied across national boundaries. This is something that did not really exist before.
[23:00] “John Calvin and Martin Luther didn’t want to secularize society or the state—anything but. They wanted to revitalize religion on different foundations. But the net result was something very different than what they intended…”
“Large chunks of society that were once the concern of the Church are no longer the concern of the Church, at least in the Protestant territories. For example, in England the monasteries are sold off, and a lot of Church land is privatized, so a lot of functions that the Church was doing—like providing welfare to the poor–are no longer being provided in sixteenth-century England. That generates a crisis of beggars and paupers in Elizabethan England which the state eventually has to solve with the introduction of the Poor Law in the early seventeenth century.”
“In the German territories, research has shown that Protestantism leads to the selling off of Church buildings. Even in Catholic Europe, the Counter-Reformation is tightly controlled by powerful monarchies in Spain and France. And so the independent authority of the Church is weakened as a result. Similarly, the ability of identity rules and religious identity to effectively govern society is weakened where you have multiple religions in one society.”
“So all of these societies which experience the Reformation wholeheartedly—France, the German territories, England—they generate religious minorities that they didn’t have before.”
“This is an ongoing problem. In England, the wars of religion destabilize the political economy for the entire period between Henry the Eighth and the Glorious Revolution. You’re always worrying whether the Catholics will somehow take control, or will turn England toward Rome. That generates the persecution of Catholics, and it generates conflict between Parliament and the King.”
“Germany is the most extreme example, because the Holy Roman Empire descends into a terrible war—the Thirty Years’ War—which is one of the worst wars in European history.”
“Throughout this period of crisis, which lasts more than a century, European rulers want to return their societies to how they had been in the medieval period. They want to regain religious homogeneity, so they think they can reconcile the Protestants and the Catholics. It’s a common view in sixteenth-century France that if the king can bring everyone together, there will be a way to bring the Protestants back into the fold. We also have the policy of expulsion which is used not only in Spain and Portugal, but also in France at the end of the seventeenth century. You feel you can’t govern effectively so long as you have a group of people who belong to another religion, so you expel them.”
“Because rulers are conditioned on this prior equilibrium, they don’t know how to deal with religious differences. And it takes basically a century-and-a-half of conflict, violence, and then accommodation before there’s a movement to reorient these societies along different rules. There’s what we recognize as a shift in political arrangements which de-emphasizes religion as a source of political legitimacy and shifts away from this reliance on identity rules towards more general rules. And, of course, this transition takes several centuries.”
They then discuss a concept called multivocal signaling. In an era of low information flow and primitive communication technologies, rulers could target alternative messages to different groups of subjects. Each message was tailored to that particular social group, and was designed to appease them and keep them in the fold. The rulers’ identity became a Rorschach ink blot designed to be interpreted many different ways by many different groups of people.
But once information became easier to disseminate and access, different groups could compare notes. Now it was no longer possible to be all things to all people; sort of like when a cheating man’s wife discovers that he has one or more secret other families. This concept is based on a book called The Struggle for Power in Early Modern Europe by political scientist Daniel H. Nexon:
[27:15] PW: “In the early modern period, especially with the rise of print and then the Reformation that follows, it gets a lot harder for rulers to be everything to all of their different groups of subjects–what Nexon terms multivocal signalling. Premodern rulers had done a lot of being one thing to one group of people in their kingdom, another thing to another group of people. So you could simultaneously be ‘Protector of the Jews’ and ‘Most Christian King,’ and this to the artisans, and this to the nobles. A ruler could be a lot of different things simultaneously because it was easy to target messages to those groups in the absence of mass media of communication.”
“But when you get the rise of print and simultaneously the splintering of society along religious lines, it gets a lot harder to be everything to everybody, because everybody knows what you’re saying to everybody else, too. So it becomes much harder for rulers to maintain these split identities that allow them to govern heterogeneous societies effectively by means of these identity rules.”
“Maybe that’s a thing that helps explain the shift to general rules. When you can’t be everything to everybody, you need to find different bases of legitimacy and power on which to rule.”
[28:33] MK: “…When we think about why religious persecution was so acute during that period—why you have these wars of religion—the kind of trite, high-school-history view is simply that people were intolerant back then. We can then look down on them from our modern liberal societies and say that people in the sixteenth century really believed in burning heretics alive, or killing people for religious differences.
But Daniel Nexon’s book really points out that because of the spread of print media, this religious crisis was really a geopolitical crisis, because Catholics in France and Spain were now interested in the fate of Catholics in England. So the Catholics in England then become a potential fifth column in the geopolitical struggle taking place for non-religious political reasons between England, France, and Spain. They’re aligned with the political interests of a foreign power. Ditto Protestants in France. Protestants in France are going to be aligned with the Dutch Republic, or with the German States or with England. So, again, a potential fifth column that the state no longer can trust.
Prior to the Reformation, there were religious differences across these European states. People would have their own local version of Catholicism. They would worship local saints and have local practices. But those local religious differences were not correlated in any way with political differences at the geopolitical level. The fact that you might have your own religious practices in Norfolk was not going to align you with the French. But by the seventeenth century, that is true for Catholics and Protestant minorities in their respective countries. So that’s another layer of this crisis that early modern rulers faced.
Nexon himself describes multivocal signaling this way:
Multivocal signaling enables central authorities to engage in divide-and-rule tactics without permanently alienating other political sites and thus eroding the continued viability of such strategies. To the “extent that local social relations and the demands of standardizing authorities contradict each other, polyvalent [or multivocal] performance becomes a valuable means of mediating between them” since actions can be “coded differently within the audiences.” Multivocal signaling, therefore, can allow central rulers to derive the divide-and-rule benefits of star-shaped political systems while avoiding the costs stemming from endemic cross pressures… The spread of reformation, in particular, made it difficult for dynasts to engage in polyvalent signaling across religiously differentiated audiences…
The Struggle for Power in Early Modern Europe: Religious Conflict, Dynastic Empires, and International Change; by Daniel H. Nexon, pp. 114-115
This also helps explain the emergence of nationalism and national identities in nineteenth-century Europe, and the demise of multi-ethnic states like the Austro-Hungarian empire. As the hand of the state reached ever deeper down into the underlying fabric of society during this period, people wanted to be directly ruled by people “like them” and not by “outsiders.” Ancient states, by contrast, did fairly little besides collecting taxes, guaranteeing safe travel, and keeping basic order, with underlying ethnic identities remaining mostly intact.
The Roman Empire, again, provides an example. You can’t look at a map of the Roman Empire at its height without pondering, “how could they govern such a vast territory without any modern technology?” The answer is: they didn’t! The empire was sort of a “stratum” above local communities whose day-to-day lives probably differed very little from those of their remote ancestors. The empire just provided an organizational framework, and little else. Even a standing army could only move as fast as a soldier could march, and communicate as fast as a horseman could ride. Rulers moved the army about strategically, like pieces on a chessboard, in order to maintain order and quell revolt. Actual interaction with government officials, however, was limited to a small coterie of aristocratic local leaders. For most ordinary people in the ancient world, the “empire” they were nominally ruled by was just a remote abstraction. With the rise of strong, centralized states, that was no longer the case. Even today separatist conflicts abound, such as in Catalonia or Kurdistan.
The emergence of general rules and modern Liberalism
And so we finally come to the introduction of general rules—rules that are written to treat everybody equally, regardless of their group identity, doctrinal creed, or any other ascribed social status. Whether you were Protestant, Catholic, or Jew (or even atheist!), the law was the same. Of course, this was an ideal often not lived up to, but it started to become the common expectation. It eventually came about after every other approach had been tried by Early Modern rulers and failed; it’s hard to win a war against a belief system. But this approach also freed up Early Modern rulers to expand state capacity in ways they could not have before, since appeasing religious officials was no longer paramount. For example, Napoleon considered his law code to be his finest and most durable achievement, surpassing even his military victories. All sorts of archaic and feudal rules were swept away.
Yet there were many attempts from below to push back against this kind of governance, and hence significant roadblocks on the way to more modern systems of professional, bureaucratic governance, democracy, and the expansion of state capacity:
[31:15] We see endless attempts by Early Modern rulers to build state capacity, and they’re always being undermined at the local level…Every attempt by these Early Modern rulers to build state capacity is one step forwards, two steps backwards. There are these forces pushing back against any attempt to build a society based on general rules—what Francis Fukuyama calls the repatrimonialization of the state—and often it’s only in war that these modern states are forged. War is driving this increase in state capacity, but war is also destroying the economy and using up the lives of hundreds of thousands of individuals. That’s why it’s such an arduous process.
Some of these Early Modern rulers are heading towards more general rules and increased state capacity, others think the way forwards is actually backwards. The term historians use is confessionalization, and in some sense these confessional states that are built in the Early Modern period are trying to rebuild the medieval equilibrium. I think Louis the Fourteenth, what he’s doing when he expels the Huguenots—the French Protestants—is looking back to the golden age of how France was before the Reformation. He thinks if only he could get back and reunify the country religiously, that would actually strengthen his power and make the state stronger.
We know after the event that that’s a failure. It doesn’t strengthen the French economy or society, because they lose a very productive minority, but it also doesn’t work even on its own terms, because by the eighteenth century there are still many, many Protestants in France. It doesn’t get rid of the problem of a religious minority.
European rulers eventually had no choice but to acquiesce to freedom of religion as we now know it. Edicts of Toleration were signed all over Europe. The Founding Fathers of the United States—for whom the wars of religion were still recent history—recognized this and enshrined it in the Constitution. Its birth was much more painful in Europe, beginning with the often radical atheism of the leaders of the French Revolution. This kicked off the long nineteenth century—the period of conflict out of which modern Liberalism was born.
With religious affiliation now being something “freely chosen” according to one’s own individual conscience, other forms of ascribed identity soon fell by the wayside. Free cities and communes had always been places for nonconformists in Medieval Europe to flee to in order to escape the stultifying conformity of the countryside and shed their traditional social obligations. These sophisticated, cosmopolitan urbanites—the bourgeoisie—became the nucleus of the new social order based around “freely chosen” social affiliations, flexible and ever-shifting personal identities, and explicit (as opposed to implicit) contractual obligations:
In our argument it was not that the Wars of Religion simply exhausted confessional and doctrinal disputes. Rather there was a transformation at the institutional level. The leading European states shifted away from identity rules towards more general rules. This shift was related to 19th-century historian Henry Sumner Maine’s discussion of the passage from status to contract: Status was imposed and ascriptive. Contracts, in contrast, are the outcome of voluntary choices. Status-based rules are invariably identity rules. Contracts provide the foundation for a system of general rules.
Moving from a fixed status to a contractual society helped set in motion a range of developments, including the growth of markets and a more extensive division of labor. But it had the unintended consequence of diminishing the political importance of religion, and this made liberalism feasible for the first time in history.
Wars played a major role in the emergence of modern states, particularly the need to raise ever-larger amounts of money to fund them. In our history of money, we saw how international merchants’ use of paper instruments of credit, such as bills of exchange, existed alongside the ruler’s legal authority to raise taxes and coin money. Bills of exchange and trade credit allowed these merchants to coordinate their activities across international boundaries. This was enforced not by the state, but by private networks of merchant-bankers (i.e., via institutions as equilibria). When the bankers’ ability to issue paper credit became conjoined with the state’s ability to levy taxes with the establishment of the Bank of England, you had a major step toward the creation of the modern welfare-warfare state. The end of the Thirty Years’ War in the Peace of Westphalia led to the concept of what political historians refer to as Westphalian sovereignty—the basis of the sovereign, absolutist nation-state. These developments, in turn, led to the establishment of a professional Weberian civil service, supplanting the patrimonial states governed by hereditary aristocrats, i.e., “depatrimonialization.” Per Wikipedia:
[Max] Weber listed several preconditions for the emergence of bureaucracy, including an increase in the amount of space and population being administered, an increase in the complexity of the administrative tasks being carried out, and the existence of a monetary economy requiring a more efficient administrative system. Development of communication and transportation technologies make more efficient administration possible, and democratization and rationalization of culture results in demands for equal treatment.
As Karl Polanyi extensively documented, strong states capable of enforcing general rules and contracts, together with haute finance, were the key requirements in the creation of Market Society. Market Society—where everything, including land and labor, was for sale and theoretically allocated according to impersonal forces of supply and demand—was not merely an expansion of the kind of activities that had gone on in generations prior. Rather, it was something altogether new and radically different, and done with the full blessing of the elite ruling classes. Patrick Deneen notes the connection in his book, Why Liberalism Failed:
Individualism and statism advance together, always mutually supportive, and always at the expense of lived and vital relations that stand in contrast to both the starkness of the autonomous individual and the abstraction of our membership in the state. In distinct but related ways, the right and left cooperate in the expansion of both statism and individualism, although from different perspectives, using different means, and claiming different agendas. This deeper cooperation helps to explain how it has happened that contemporary liberal states–whether in Europe or America–have become simultaneously more statist, with ever more powers and authority vested in central authority, and more individualistic, with people becoming less associated and involved with such mediating institutions as voluntary associations, political parties, churches, communities, and even family. For both “liberals” and “conservatives,” the state becomes the main driver of individualism, while individualism becomes the main source of expanding power and authority of the state. p. 46
Our main political choices come down to which depersonalized mechanism will purportedly advance our freedom and security–the space of the market, which collects our billions upon billions of choices to provide for our wants and needs without demanding from us any specific thought or intention about the wants and needs of others, or the liberal state, which establishes depersonalized procedures and mechanisms for the wants and needs of others that remain insufficiently addressed by the market.
Thus the insistent demand that we choose between protection of individual liberty and expansion of state activity masks the true relation between the state and market: that they grow constantly and necessarily together. Statism enables individualism, individualism demands statism. For all the claims about electoral transformations–for “Hope and Change,” or “Making America Great Again”–two facts are naggingly apparent: modern liberalism proceeds by making us both more individualist and more statist. This is not because one party advances individualism without cutting back on statism while the other does the opposite; rather, both move simultaneously in tune with our deepest philosophic premises. p. 17
The authors display their Libertarian biases toward the end of the article with this line: “While the far left has never accepted liberal values such as freedom of expression and freedom of religion, antipathy towards liberal values is now evident in mainstream progressive publications as well. Liberalism is indicted because it is perceived as legitimating inequality and failing to endorse social justice.” Notice the lack of citations here.
A nice strawman, but liberalism is not indicted; capitalism is. Capitalism is inherently undemocratic, since it invests disproportionate power in an unelected minority capitalist class, whose power stems from paper ownership claims (in deeds, stocks, bonds, and accounts) which can be passed down in perpetuity. As Deneen notes, in practice this simply replaces one aristocracy with another. And we all know that the rich can buy special treatment under the law thanks to their disproportionate wealth and influence in comparison with the rest of us, something which makes a mockery of so-called “liberal values.”
Also, under Neoliberalism repatrimonialization and rent-seeking have exploded. Monopolies and oligopolies control practically every major industry. The feckless rich are bailed out while ordinary citizens are left to their own devices. Prices have less to do with actual production costs than sheer market power, and rules are written and re-written by the industries themselves in order to privilege existing actors and keep out competitors (including governments themselves). Parasitic financial gambling has become the highest-return activity rather than providing useful goods and services. Incompetent cronies and family members take over key positions in the public and private sector. The upper class uses elite universities as a moat to maintain their elevated status, despite their demonstrated lack of judgement or competence.
Capitalism as it currently stands also commonly makes rules that favor certain groups over others. Professional classes like doctors, lawyers, and engineers are shielded from international competition by government restrictions. Patent and copyright laws enforced by strong states prevent the copying of innovations by others, and preserve the existing wealth distribution. Wealth is taxed more lightly than wages. Meanwhile, most average workers are left to “sink or swim” in a harsh, competitive, globalized job market with no protections whatsoever. This is all rationalized as an “inevitable” force of nature. Dean Baker has written a whole book about it, called Rigged.
In the end, the authors conclude, “[W]e think the core characteristics of a liberal society are the rule of law and reliance on general rules,” and, “Liberalism is valuable because it is the only form of social order we know of that is consistent with a high degree of autonomy and human dignity.”
Well, under that definition, socialism would fit the bill just as well, if not better. It’s hard to see a lot of “dignity” and “autonomy” with the number of people struggling in modern-day America. It’s hard to equate the millions of prisoners toiling away for pennies an hour with “dignity.” And it’s hard to have “autonomy” when the base condition of existence for most of us is having to constantly sell our labor or face utter ruin. Liberalism is—or should be—more than simply allowing the rich the “freedom” to make whatever rules they wish for their own benefit, to the detriment of society as a whole. If that doesn’t happen soon, then don’t expect Liberalism to last much longer.
I’ve not had much time to write – I’m sprucing up Hipcrime Vocab international headquarters in case I want to sell it and relocate. I’ll say more about that another time.
This article is particularly interesting given what we talked about last time concerning the Iroquois culture. It’s about a study of a Bronze Age farming settlement in Europe (modern-day Augsburg) and concludes, “Social Stratification Dates Back to Bronze Age Societies.” The societies studied by the researchers were:
. . .members of Central European farming communities that spanned from the late Neolithic period through the Bronze Age—or from around 2800 B.C. through 1300 B.C.
So I’m guessing this is the Corded Ware, Bell Beaker, and Funnelbeaker cultures in particular (or similar cultures). Most likely they spoke an Indo-European language and may have been proto-Celtic.
. . . it has long been assumed that prior to the Athenian and Roman empires—which arose nearly 2,500 and more than 2,000 years ago, respectively—human social structure was relatively straightforward: you had those who were in power and those who were not.
A study published Thursday in Science suggests it was not that simple. As far back as 4,000 years ago, at the beginning of the Bronze Age. . .human families of varying status levels had quite intimate relationships. Elites lived together with those of lower social classes and women who migrated in from outside communities. It appears early human societies operated in a complex, class-based system that propagated through generations.
I’m not sure it was ever assumed to be that simple, but whatever. The interesting thing here is what it says about the creation of inequality. What we see here is a household structure, with various individuals ranked within it. People of different status lived cheek-to-jowl, and this is revealed by the burials:
Related individuals, the study’s authors found, were laid to rest with goods and belongings that appeared to be passed down through generations. The unrelated people in the household were buried with nothing, suggesting they were a lower class of “family members,” who were not given the ceremonial treatment.
“We don’t know if the low-status individuals in Augsburg were slaves, menial staff or something else,” comments Philipp Stockhammer of the Ludwig Maximilian University of Munich, who was a co-author of the new study. “But we can see that in every household, individuals of very different status were living together.”
So, then, it’s quite likely that inequality first appeared within households, before it became institutionalized more broadly. Second, my guess is that certain lineages became ranked lineages, with some having a claim to a more ancient or revered ancestor, for instance. When you combine these two factors, you get a two-pronged stratification giving rise to inequality: one interfamilial—between different Houses—and one intrafamilial—between different individuals within the House. The highest-ranking individuals of the highest-ranking Houses were probably the most important decision-makers (chiefs or kings). However, without a permanent standing army or police, would-be leaders had, as with the Iroquois, no way to impose their will on the rest of the tribe.
It’s quite possible that this was a sort of feudal-style order based around cattle ownership. In his Lectures on the History of Early Institutions, Henry Maine considered whether the feudal system as it developed in post-Roman Europe grew out of the land-tenure laws of the Celtic and Germanic tribal cultures that occupied the continent.
Under this system, the lands of a tribe (fine) were not owned outright by any single individual, although the chiefs (flaiths) may have possessed small portions of their own land. The chiefs did manage the land, however, giving them a considerable degree of control over the grazing herds. They loaned out portions of the herd to other tribe members, a practice called giving stock. The receivers of stock became vassals (céiles) of the chief, with certain obligations, including military duties. The amount of stock received from the chief determined one’s social status. Those who owed a little stock were Saer (free) tenants; those with more loans were Daer (base) tenants. The Daer tenants had the more onerous obligations. There were also freemen with no property, and an unfree servile class, with differing degrees of legal protection (Bothachs, Sen-Cleithes, and fuidhirs), with fuidhirs also subdivided into Saer and Daer. These folks had no clan affiliation, and were tantamount to slaves:
Every considerable tribe, and almost every smaller body of men contained in it, is under a Chief, whether he be one of the many tribal rulers whom the Irish records call Kings, or whether he be one of those heads of joint-families whom the Anglo-Irish lawyers at a later date called the Capita Cognationum. But he is not owner of the tribal land. His own land he may have…and over the general tribal land he has a general administrative authority…and, probably in that capacity, he has acquired great wealth in cattle…
It has somehow become of great importance to him to place out portions of his herds among the tribesmen…Thus the Chiefs appear in the Brehon law as perpetually ‘giving stock,’ and the tribesmen as receiving it…It is by taking stock that the free Irish tribesman becomes the Ceile or Kyle, the vassal or man of his Chief, owing him not only rent but service and homage…
The new position which the tribesman assumed through accepting stock from a Chief varied according to the quantity of stock he received. If he took much stock he sank to a much lower status than if he had taken little. On this difference in the quantity accepted there turns the difference between the two great classes of Irish tenantry, the Saer and Daer tenants…
The Saer-stock tenant, distinguished by the limited amount of stock which he received from the Chief, remained a freeman and retained his tribal rights in their integrity. The normal period of his tenancy was seven years, and at the end of it he became entitled to the cattle which had been in his possession. Meantime he had the advantage of employing them in tillage, and the Chief on his part received the ‘growth and increase and milk,’…besides this it entitled the Chief to receive homage and manual labour; manual labour is explained to mean the service of the vassal in reaping the Chief’s harvest and in assisting to build his castle or fort, and it is stated that, in lieu of manual labour, the vassal might be required to follow his Chief to the wars.
Any large addition to the stock deposited with the Saer-stock tenant, or an unusual quantity accepted in the first instance by the tribesman, created the relation between vassal and chief called Daer-stock tenancy. The Daer-stock tenant had unquestionably parted with some portion of his freedom, and his duties are invariably referred to as very onerous. The stock given to him by the Chief consisted of two portions, of which one was proportionate to the rank of the recipient, the other to the rent in kind to which the tenant became liable…Beside the rent in kind and the feudal services, the Chief who had given stock was entitled to come, with a company of a certain number, and feast at the Daer-stock tenant’s house, at particular periods, for a fixed number of days…
…the relation out of which Daer-stock tenancy and its peculiar obligations arose was not perpetual. After food-rent and service had been rendered for seven years, if the Chief died, the tenant became entitled to the stock; while, on the other hand, if the tenant died, his heirs were partly, though not wholly, relieved from their obligation. At the same time it is very probable that Daer-stock tenancy, which must have begun in the necessities of the tenant, was often from the same cause rendered practically permanent…
…the effect of the ancient Irish relation was to produce, not merely a contractual liability, but a status. The tenant had his social and tribal position distinctly altered by accepting stock. Further, the acceptance of stock was not always voluntary. A tribesman, in one stage of Irish custom at all events, was bound to receive stock from his own ‘King,’ or, in other words, from the Chief of his tribe in its largest extension; and everywhere the Brehon laws seem to me to speak of the acceptance of stock as a hard necessity.
Once again we see that status is dependent upon credit/debt relationships. Over time, these relationships become solidified. The chief who distributes cattle to the tribe is also the chief who distributes booty in raids, and cattle rustling is a frequent theme in early Irish literature. We don’t know if the social structure of these ancient central European farming communities was close to that of tribal Ireland, but it may have been.
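The status ladder Maine describes can be compressed into a small lookup, sketched here (my own schematic; the numeric threshold is invented, since the Brehon laws graded status by “little” versus “much” stock rather than any figure we can recover):

```python
# A schematic of the Brehon-law stock/status ladder described above.
# The threshold of 10 is purely illustrative.

def tenant_status(stock_accepted: float, has_clan: bool) -> str:
    if not has_clan:
        return "fuidhir (servile, no tribal rights)"
    if stock_accepted == 0:
        return "free tribesman (no vassal obligations)"
    if stock_accepted <= 10:           # "little" stock -> limited obligations
        return "Saer (free) tenant"
    return "Daer (base) tenant"        # "much" stock -> onerous obligations

for stock, clan in [(0, True), (5, True), (50, True), (5, False)]:
    print(f"stock={stock}, clan={clan} -> {tenant_status(stock, clan)}")
```

The key feature the sketch tries to capture is that accepting stock changed one’s status, not merely one’s contractual debts: the more you took, the lower you sank.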
Another clue to the social structure comes from another finding:
By radio dating the teeth samples and comparing them with regional geographical radioactivity profiles, Stockhammer and his collaborators also determined where each person grew up. Traces of radioactive elements called isotopes are all around us, including in our food and water. From childhood, these elements are incorporated into our bones and can be used to determine where someone was raised. The results show that in nearly all of the households studied, there were females who hailed from elsewhere.
Whereas the remains suggest that farmsteads were passed through many generations of males—up to five in some cases—females only persisted in a community for one generation. This observation means a system of patrilocality was followed: men stayed in their place of upbringing, while women moved in with their husband’s family. Patrilocal cultures had previously existed, including far back in the Paleolithic, but the findings support the idea that the practice became more common as the organization of societies developed.
Stockhammer points out that social structure has long been a major topic in archeology and that countless studies have explored the communal interactions of ancient societies. Yet he feels the new study illuminates the transition of societal organization as we moved, from the late Stone Age to the Bronze Age, toward individual families living with those of a subservient class and women from other communities. “We added a new aspect to the current state of the art: the integration of genetic, isotopic and archaeological data, which helped us understand the complexity of past social structures,” Stockhammer says. Though he is resolute that his findings cannot directly be correlated with other ancient societies, he does draw a comparison with classical Greece’s oikos family structure and Rome’s familia, in which slaves and those of lower status were part of the family.
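The provenance logic in the passage above reduces to a range check, sketched here (my own illustration; such studies typically compare strontium isotope ratios in tooth enamel against a local baseline, and the values below are invented, not from the study):

```python
# Invented sketch: enamel isotope ratios are fixed in childhood, so a value
# outside the local baseline suggests the person grew up somewhere else.

LOCAL_SR_RANGE = (0.7085, 0.7105)  # hypothetical local 87Sr/86Sr baseline

def grew_up_locally(sr_ratio: float) -> bool:
    lo, hi = LOCAL_SR_RANGE
    return lo <= sr_ratio <= hi

# Hypothetical burials from one farmstead
burials = {
    "male burial 1": 0.7092,    # inside the baseline: likely local
    "female burial 1": 0.7131,  # outside the baseline: likely migrated in
    "female burial 2": 0.7120,
}

for who, ratio in burials.items():
    origin = "local" if grew_up_locally(ratio) else "non-local"
    print(f"{who}: {ratio:.4f} -> {origin}")
```

Applied grave by grave, a check like this is what lets the researchers conclude that the men were overwhelmingly local while the women had come from elsewhere.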
Indeed, Greek and Roman cultures initially developed out of such farming communities. The oikos and the familia were extended households that formed the smallest constituent parts of these societies. They were united by kinship under the authority of the patriarch, as Maine argued in Ancient Law:
It would be a very simple explanation of the origin of society if we could … suppose that communities began to exist wherever a family held together instead of separating at the death of its patriarchal chieftain.
In most of the Greek states and in Rome there long remained the vestiges of an ascending series of groups out of which the State was at first constituted. The Family, House, and Tribe of the Romans may be taken as the type of them, and they are so described to us that we can scarcely help conceiving them as a system of concentric circles which have gradually expanded from the same point.
The elementary group is the Family, connected by common subjection to the highest male ascendant. The aggregation of Families forms the Gens or House. The aggregation of Houses makes the Tribe. The aggregation of Tribes constitutes the Commonwealth. Are we at liberty to follow these indications, and to lay down that the commonwealth is a collection of persons united by common descent from the progenitor of an original family?
Of this we may at least be certain, that all ancient societies regarded themselves as having proceeded from one original stock, and even laboured under an incapacity for comprehending any reason except this for their holding together in political union. The history of political ideas begins, in fact, with the assumption that kinship in blood is the sole possible ground of community in political functions; nor is there any of those subversions of feeling, which we term emphatically revolutions, so startling and so complete as the change which is accomplished when some other principle—such as that, for instance, of local contiguity—establishes itself for the first time as the basis of common political action.
What’s most interesting to me is the patrilocal method of residence. The Iroquois, as you recall, were matrilocal—tracing descent from the mother’s side and living with her clan. This latest study gives a boost to Engels’s theory that when matrilineal and matrilocal cultures were “overthrown” in favor of patriarchy, property became heritable down the male line, giving rise to private property and inequality. Go back and read what he said about this in the previous post.
It’s worth noting that pastoral (cattle) cultures are—without any exception that I’m aware of—male-dominated and patriarchal. Is the introduction of cattle the path to inequality? Contrast this with the matrilocal and (semi-)matriarchal culture of the Iroquois with their clan mothers. They had no large domesticated animals (which didn’t exist in North America), and practiced hoe-based farming, which was done mainly by women. This seems to have meant that women had higher status in that culture, with a correspondingly flatter social hierarchy and less inequality of property.
As for when it was overthrown, Marija Gimbutas famously argued for years that “Old Europe” was matrilineal and matriarchal, and practiced “goddess worship” based on the large quantity of female figurines that she found. She then claimed that the Kurgan peoples swept in from the east and replaced the Old European culture with one that was much more warlike, patriarchal, and worshiped masculine gods. The farming peoples in this study would have been their descendants. They were also likely the ancestors of the various Indo-European cultures. While she may have overstated the importance of goddess-worship (on very little evidence), in many other respects Gimbutas may have been largely correct about the transition (she is backed up by recent DNA evidence). To what extent were these “low-status” individuals the earliest farmers and hunter-gatherers of Old Europe?
. . .Stockhammer believes marrying outside one’s community encouraged the cultural exchange of information, which ultimately led to the formation of new civilizations. Increasing social interactions with other communities allowed for a more efficient transfer of skills and goods to a wider population. “I am sure the fact that a large number of adult women from outside the society entered the society had an important effect—that new knowledge and technologies came with them,” he says.
Anthropologists and scientists from other fields refer to a concept called ratcheting, in which cultural information is not just shared and learned but also modified and improved. If ancient humans mingled with outside communities, countless kernels of know-how would have been borrowed and altered for both good and bad (more effective tools; more lethal weapons and warfare).
Individuals marrying outside of their community may have also made sense from the standpoint of genetic fitness and allowed local societies to thrive. Doing so would have prevented the genetic abnormalities that come from inbreeding and perhaps, in the long term, improved collective community survival.
Interestingly, our closest animal relatives, chimpanzees, are also female exogamous. That is, the females leave the community in which they were born, whereas the males stick around. Could this be a clue to human social relations? As a side note, even today, it feels like more women leave the places of their birth to seek out mates, while those who stay put are more often men. If I may be so bold, I suspect this is why women are so much more into travel than men are on average (there are exceptions; I’m one of them), and do so much more of it. In my failing Rust Belt city, for example, every woman not pregnant by twenty-one moves away to somewhere better, and only comes back to raise her kids (aside from the occasional boomerang).
Only humans can form these types of affinal relationships, and they allow for much larger social agglomerations and transfers of information. Robin Dunbar talks about this in his book Human Evolution: Our Brains and Behavior. Chimps may be female exogamous, but there is no ongoing relationship between families, and hence no uniting of disparate chimp bands. Consequently, there is no cultural or knowledge transfer between chimp bands; they are largely hermetically sealed. H. sapiens’ ability to overcome this limitation may have played a role in our coming to dominate the planet, and may have very deep roots, indeed.
The foundation of all of this may have been religion, specifically, the tutelary religion of the hearth, as Fustel de Coulanges eloquently describes in The Ancient City:
A family was composed of a father, a mother, children, and slaves. This group, small as it was, required discipline. To whom, then, belonged the chief authority? To the father? No. There is in every house something that is above the father himself. It is the domestic religion; it is that god whom the Greeks called the hearth master—ἑστία δέσποινα—whom the Romans called Lar familiaris. This divinity of the interior, or what amounts to the same thing, the belief that is in the human soul, is the least doubtful authority. This is what fixed rank in the family.
The father ranks first in presence of the sacred fire. He lights it, and supports it; he is its priest. In all religious acts his functions are the highest; he slays the victim, his mouth pronounces the formula of prayer which is to draw upon him and his the protection of the gods. The family and the worship are perpetuated through him; he represents, himself alone, the whole series of ancestors, and from him are to proceed the entire series of descendants. Upon him rests the domestic worship; he can almost say, like the Hindu, “I am the god.” When death shall come, he will be a divine being whom his descendants will invoke.
Consistent with the findings of the researchers about unrelated females moving in to male-centric houses, Fustel de Coulanges also found that Roman hearth religion had women leaving the house of their birth and becoming a part of their husband’s family:
This religion did not place woman in so high a rank. The wife takes part in the religious acts, indeed, but she is not the mistress of the hearth. She does not derive her religion from her birth; she is initiated into it at her marriage. She has learned from her husband the prayer that she pronounces. She does not represent the ancestors, since she is not descended from them. She herself will not become an ancestor; placed in the tomb, she will not receive special worship. In death, as in life, she counts only as a part of her husband.
Greek law, Roman law, and Hindu law, all derived from this old religion, agree in considering the wife as always a minor. She could never have a hearth of her own; she was never the chief of a worship. At Rome she received the title of mater familias, but she lost this if her husband died. Never having a sacred fire which belonged to her, she had nothing of what gave authority in the house. She never commanded; she was never even free, or mistress of herself. She was always near the hearth of another, repeating the prayer of another; for all the acts of religious life she needed a superior, and for all the acts of civil life a guardian. (pp. 68-69)
And getting back to the initial theme of passing property down via inheritance, seen in the burials of these communities: that too seems to have been intimately connected to the religious worship of the hearth, according to Fustel de Coulanges:
There are three things which, from the most ancient times, we find founded and solidly established in Greek and Italian societies: the domestic religion; the family; and the right of property — three things which had in the beginning a manifest relation, and which appear to have been inseparable. The idea of private property existed in the religion itself. Every family had its hearth and its ancestors. These gods could be adored only by this family and protected it alone. They were its property.
Now, between these gods and the soil, men of the early ages saw a mysterious relation. Let us first take the hearth. This altar is the symbol of a sedentary life; its name indicates this. It must be placed upon the ground; once established, it cannot be moved…The god is installed there not for a day, not for the life of one man merely, but for as long a time as this family shall endure, and there remains any one to support its fire by sacrifices. Thus the sacred fire takes possession of the soil, and marks it as its own. It is the god’s property.
And the family, which through duty and religion remains grouped around its altar, is as much fixed to the soil as the altar itself. The idea of domicile follows naturally. The family is attached to the altar, the latter is attached to the soil; an intimate relation, therefore, is established between the soil and the family. There must be its permanent home, which it will not dream of quitting, unless an unforeseen necessity constrains it to it. Like the hearth, it will always occupy this spot. This spot belongs to it, is its property, the property not simply of a man, but of a family, whose different members must, one after another, be born and die here. (p. 48)
Is this the origin of private property? In ancient Rome, when land (and slaves) changed hands, the transfer was accompanied by a “solemn ceremony” called mancipatio (the root of the word emancipation). Over time, these ceremonies were replaced by cash transfers and real estate markets, and inequality ran amok, arguably contributing to Rome’s eventual downfall.
How closely did these ancient European farming cultures resemble those of the ancient Greeks and Romans? After all, both were based around farming and cattle-rustling. We can only speculate, since culture, unlike the elements in bones and teeth, leaves no chemical trace behind. The researchers’ invocation of Greek and Roman culture is telling, however. It certainly seems like they may have been quite similar. Hopefully, new methods like those used in the article will give us even more data to work with, as the researchers hope:
University of Michigan archeologist Alicia Ventresca Miller, who was not involved in the paper, shares Stockhammer’s enthusiasm and feels this new work reveals a lot about early human inheritance of goods and property. “As far as I can tell, there are no other studies that have such large sample sizes and multiple analyses to come to these conclusions, especially for prehistoric groups,” she says. “Their finding that wealth was inherited, rather than achieved, has real impacts for research on inequality and will likely change our understanding of ancient Europe. The results give us insight into the complexity of ancient lifeways.”
Krishna Veeramah, a population geneticist in the department of ecology and evolution at Stony Brook University, who was also not involved in the study, thinks the new multidisciplinary research approach may serve as a model for future work, especially as characterizing ancient DNA becomes more affordable and widespread.
On a related note, it seems that these pre-state cultures were hardly static. Over at Peter Turchin’s blog, he writes:
As the readers of this blog know, a big chunk of my research focuses on why complex societies go through cycles of alternating internally peaceful, or integrative, phases and turbulent, or disintegrative periods. In all past state-level societies, for which we have decent data, we find such “secular cycles” (see more in our book Secular Cycles).
What was a surprise for me was to find that pre-state societies also go through similar cycles. Non-state centralized societies (chiefdoms) cycle back and forth between simple (one level of hierarchy below the chief) and complex (two or more hierarchical levels) chiefdoms. But now evidence accumulates that even non-centralized, non-hierarchical societies cycle. The work by archaeologists, such as Stephen Shennan, showed that various regions within Europe went through three or four population cycles before the rise of centralized societies (see, for example, his recent book The First Farmers of Europe).
These cycles were quite drastic in amplitude. For example, last month at a workshop in Cologne, I learned from archaeologists working in the North Rhine region that population declines there could result in regional abandonment. Several hypotheses have been advanced, including the effects of climate fluctuations or soil exhaustion. But there is no scientific consensus—this is a big puzzle.
The authors of the paper hypothesize that as power became too centralized, the various families and social groups comprising the culture simply dispersed rather than becoming subservient to permanent despotic power. Turchin thinks the cause was warfare, specifically the protection of surpluses from nomadic outsiders:
First, why did the different groups move together in the first place? From almost any point of view, except one [defense], this was a really poor decision. Such crowding together resulted in serious problems with sanitation and disease. Additionally, farmers had to waste a lot of time traveling to their fields, because such huge settlement required a lot of land to support it. The only reason for such population concentration that makes sense to me is collective defense…
The second question is why, at the end of the mega-settlement period, the population didn’t simply disperse; there was a very substantial population collapse. Again, what was the reason for this? In historical periods the usual answer is pervasive endemic warfare. Not only does war kill people, its effect on demography is even greater due to the creation of a “landscape of fear,” which doesn’t permit farmers to cultivate fields, so that the local population gradually starves, has fewer babies, and is further diminished by out-migration…
However, the former hypothesis is consistent with James C. Scott’s ideas that people in early farming cultures were often looking for a way to get out from the bitter toil and backbreaking work of farming by abandoning it and becoming “barbarians.” This, he says, happened whenever authority became too coercive for too long. Those stockade walls were to keep the farmers in, not the barbarians out. Slate Star Codex recently reviewed Scott’s book:
Scott thinks of these collapses not as disasters or mysteries but as the expected order of things. It is a minor miracle that some guy in a palace can get everyone to stay on his fields and work for him and pay him taxes, and no surprise when this situation stops holding. These collapses rarely involved great loss of life. They could just be a simple transition from “a bunch of farming towns pay taxes to the state center” to “a bunch of farming towns are no longer paying taxes to the state center”. The great world cultures of the time – Egypt, Sumeria, China, wherever – kept chugging along whether or not there was a king in the middle collecting taxes from them. Scott warns against the bias of archaeologists who – deprived of the great monuments and libraries of cuneiform tablets that only a powerful king could produce – curse the resulting interregnum as a dark age or disaster. Probably most people were better off during these times.
The book ends with a chapter on “barbarians”. Scott reminds us that until about 1600, the majority of human population lived outside state control; histories that focus on states and forget barbarians are forgetting about most humans alive. In keeping with his thesis, Scott reviews some ancient sources that talk about barbarians in the context of people who did not farm or eat grain. Also in keeping with his thesis, he warns against thinking of barbarians as somehow worse or more primitive. Many barbarians were former state citizens who had escaped state control to a freer and happier lifestyle. Barbarian tribes could control vast trading empires, form complex confederations, and enter in various symbiotic relationships with the states around them. Scott wants us to think of these not as primitive people vs. advanced people, but as two different interacting lifestyles, of which the barbarian one was superior for most people up until a few centuries ago.
Speaking of reviews, I’ve finished reading Civilized to Death, and I suppose I should write a review. It’s no secret that I’m very partial to its thesis, but highlighting some especially relevant parts might be enlightening.
It’s here that we finally get to what’s really the heart of this entire series of posts, which is this: in the West, paper money has been an instrument of revolution.
Both the American and French Revolutions were funded via paper money, and it’s very likely they could not have succeeded without it. It allowed new and fledgling regimes to command necessary resources and fund their armies, which allowed them to take on established states. While established states have mints, a tax base, ownership of natural resources, the ability to write laws, and so on, a rebellion against an established order has none of these things. So the ability to issue IOUs as payment makes starting a revolution far more feasible. As we’ve already seen, just about every financial innovation throughout history came about due to the costs of waging war. Paper money was no exception.
One might even go so far as to say that the American, French and Russian Revolutions would never have been able to happen at all without the invention of paper money!
The British government passed laws which forbade the issuing and circulation of paper money in the colonies. The monetary experiments came to an end. As you might expect, the domestic economy shrank, and commerce was severely constricted. Of course, the colonists became quite angry at this turn of events.
British authorities initially viewed colonial paper currency favorably because it supported trade with England, but following New England’s “great inflation” in the 1740s, this view changed. Parliament passed the Currency Act of 1751 to strictly limit the quantity of paper currency that could be issued in New England and to strengthen its fiscal backing.
The Act required the colonies to retire all existing bills of credit on schedule. In the future, the colonies could, at most, issue fiat currencies equal to one year’s worth of government expenditures provided that they retired the bills within two years. During wars, colonies could issue larger amounts, provided that they backed all such issuances with taxes and compensated note holders for any losses in the real value of the notes, presumably by paying interest on them.
As a further important constraint on the colonies’ monetary policies, Parliament prohibited New England from making any fiat currency legal tender for private transactions. In 1764, Parliament extended the Currency Act to all of the American colonies.
Paper Money and Inflation in Colonial America (Owen F. Humpage, Economic Commentary, May 13, 2015)
With governments barred from issuing paper notes as IOUs, banking might have filled the void. But that option, too, was cut off by the British government. Last time we saw that the South Sea Bubble, along with a panoply of related schemes, had nearly taken down the entire British economy (as the Mississippi Bubble had done in France). In response, Parliament passed the Bubble Act, which forbade any chartered corporation not expressly authorized by a Royal Charter. This effectively put the kibosh on banking as an alternative source of paper money in the American colonies.
Given their instinct for experiment in monetary matters, it would have been surprising if the colonists had not discovered or invented banks. They did, and their enthusiasm for this innovation would have been great had it not also been systematically curbed.
In the first half of the eighteenth century the New England colonies, along with Virginia and South Carolina, authorized banking institutions. The most famous, as also the most controversial of these, was the magnificently named Land Bank Manufactory Scheme of Massachusetts which, very possibly, owed something to the ideas of John Law.
The Manufactory authorized the issue of bank notes at nominal interest to subscribers to its capital stock – the notes to be secured, more or less, by the real property of the stockholders. The same notes could be used to pay back the loan that their issue had incurred. This debt could also be repaid in manufactured goods or produce, including that brought into existence by the credit so granted.
The Manufactory precipitated a bitter dispute in the colony. The General Court was favorable, a predisposition that was unquestionably enhanced by the award of stock to numerous of the legislators. Merchants were opposed. In the end, the dispute was carried to London.
In 1741, the Bubble Acts – the British response, as noted, to the South Sea Company and associated promotions and which outlawed joint-stock companies not specifically authorized by law – were declared to apply to the colonies. It was an outrageous exercise in post-facto legislation, one that helped inspire the Constitutional prohibition against such laws. However, it effectively ended the colonial banks. (Galbraith, pp. 56-57)
Benjamin Franklin, as we have seen, was a longstanding advocate of paper money. He wrote treatises on the subject, and even printed some of it on behalf of the government of Pennsylvania. It was this paper money, he argued, that was the cause of the colonies’ general prosperity in contrast to the widespread poverty and discontent he witnessed everywhere in England:
Before the war, the colonies sent Benjamin Franklin to England to represent their interests. Franklin was greatly surprised by the amount of poverty and the high unemployment. It just didn’t make sense: England was the richest country in the world, but its working class was impoverished. He wrote, “The streets are covered with beggars and tramps.”
It is said that he asked his friends in England how this could be so; they replied that they had too many workers. Many believed, along with Malthus, that wars and plague were necessary to rid the country of its manpower surpluses.
“We have no poor houses in the Colonies; and if we had some, there would be nobody to put in them, since there is, in the Colonies, not a single unemployed person, neither beggars nor tramps.” – Benjamin Franklin
He was asked why the working class in the colonies were so prosperous.
“That is simple. In the Colonies, we issue our own paper money. It is called ‘Colonial Scrip.’ We issue it in proper proportion to make the goods pass easily from the producers to the consumers. In this manner, creating ourselves our own paper money, we control its purchasing power and we have no interest to pay to no one.” – Benjamin Franklin
Soon afterward, the English bankers demanded that the King and Parliament pass a law that prohibited the colonies from using their scrip money. Only gold and silver could be used, and these would be provided by the English bankers. This began the plague of debt-based money in the colonies that had cursed the English working class.
The first law was passed in 1751, and a harsher law was passed in 1764. Franklin claimed that within one year the colonies were filled with unemployment and beggars, just like England, because there was not enough money to pay for goods and work. The money supply had been cut in half.
William Pitt’s prosecution of the war was conducted by running up government debt, and the settlement of this debt after the war’s conclusion required the raising of taxes by Parliament. Since, from Britain’s view, the war had been fought in order to protect its colonies, it felt that it was only fair that the colonies bore some of the financial burden. Colonial scrip was useless to Parliament in this regard, as was barter. The repayment of British lenders to the Crown could only be done in specie.
The colonies, as you correctly pointed out, did not have this in any significant quantity, although in the view of British authorities this was the colonies’ problem and not theirs. This policy also came on the heels of the approach of benign neglect conducted by Robert Walpole as Prime Minister, under which the colonies were allowed to do pretty much as they pleased so long as their activities generally benefited the British Crown. It should also be noted here that demands of payment of taxes in hard currency is a common tactic for colonial powers to undermine local economies and customs. It played that role in fomenting the American Revolution as well as the Whiskey Rebellion of the new Constitutional republic, not to mention how it was used in South Africa to compel natives participating in a traditional economy to abandon their lands and take up work as laborers in the gold mines.
Now, it would be unreasonable to say that this was THE cause of the American Revolution. In school, we’re taught that taxes were the main cause: “No taxation without representation” went the slogan (the grievance that precipitated the Boston Tea Party). We’re also told that the colonists were much aggrieved by levies such as those of the unpopular Stamp Act.
But the suppression of paper money and local currency issuance by the British government appears to have been just as much of a cause, though it is probably unknown to the vast majority of Americans. The reason for this strange omission is unclear. Galbraith thinks that more conservative attitudes towards money creation in modern times have caused even American historians to argue that the British authorities were largely correct in their actions!
The English historian John Twells wrote about the money of the colonies, the Colonial Scrip:
“It was the monetary system under which America’s Colonies flourished to such an extent that Edmund Burke was able to write about them: ‘Nothing in the history of the world resembles their progress.’ It was a sound and beneficial system, and its effects led to the happiness of the people.
In a bad hour, the British Parliament took away from America its representative money, forbade any further issue of bills of credit, these bills ceasing to be legal tender, and ordered that all taxes should be paid in coins. Consider now the consequences: this restriction of the medium of exchange paralyzed all the industrial energies of the people. Ruin took place in these once flourishing Colonies; most rigorous distress visited every family and every business, discontent became desperation, and reached a point, to use the words of Dr. Johnson, when human nature rises up and asserts its rights.”
Peter Cooper, industrialist and statesman, wrote:
“After Franklin gave explanations on the true cause of the prosperity of the Colonies, the Parliament exacted laws forbidding the use of this money in the payment of taxes. This decision brought so many drawbacks and so much poverty to the people that it was the main cause of the Revolution. The suppression of the Colonial money was a much more important reason for the general uprising than the Tea and Stamp Act.”
Our Founding Fathers knew that without financial independence and sovereignty there could be no other lasting freedoms. Our freedoms and national sovereignty are being lost because most people do not understand our money system…
If paper money was the cause of the American Revolution, it was also the solution. The Continental Congress issued IOUs to pay for the war – called ‘Continental notes’ or ‘Continental scrip’:
With independence the ban by Parliament on paper money became, in a notable modern phrase, inoperative. And however the colonies might have been moving towards more reliable money, there was now no alternative to government paper…
Before the first Continental Congress assembled, some of the colonies (including Massachusetts) had authorized note issues to pay for military operations. The Congress was without direct powers of taxation; one of its first acts was to authorize a note issue. More states now authorized more notes.
It was by these notes that the American Revolution was financed….
Robert Morris, to whom the historians have awarded the less than impeccable title of ‘Financier of the Revolution’, obtained some six-and-a-half million dollars in loans from France, a few hundred thousand from Spain, and later, after victory was in prospect, a little over a million from the Dutch. These amounts, too, were more symbolic than real. Overwhelmingly the Revolution was paid for with paper money.
Since the issues, Continental and state, were far in excess of any corresponding increase in trade, prices rose – at first slowly and then, after 1777, at a rapidly accelerating rate…Eventually, in the common saying, ‘a wagon-load of money would scarcely purchase a wagon-load of provisions’. Shoes in Virginia were $5000 a pair in the local notes, a full outfit of clothing upwards of a million. Creditors sheltered from their debtors like hunted things lest they be paid off in worthless notes. The phrase ‘not worth a Continental’ won its enduring place in American language. (Galbraith, pp. 58-59)
Despite this painful bout of hyperinflation, as Galbraith notes, there was simply no other viable alternative to fund the Revolutionary War at the time:
Thus the United States came into existence on a full tide not of inflation but of hyperinflation – the kind of inflation that ends only in the money becoming worthless. What is certain, however, is the absence of any alternative.
Taxes, had they been authorized by willing legislators on willing people, would have been hard, perhaps impossible, to collect in a country of scattered population, no central government, not the slightest experience in fiscal matters, no tax-collection machinery and with its coasts and numerous of its ports and customs houses under enemy control.
And people were far from willing. Taxes were disliked for their own sake and also identified with foreign oppression. A rigorous pay-as-you-go policy on the part of the Continental Congress and the states might well have caused the summer patriots (like the monetary conservatives) to have second thoughts about the advantages of independence.
Nor was borrowing an alternative. Men of property, then the only domestic source, had no reason to think the country a good risk. The loans from France and Spain were motivated not by hope of return but by malice towards an ancient enemy.
So only the notes remained. By any rational calculation, it was the paper money that saved the day. Beside the Liberty Bell there might well be a tasteful replica of a Continental note. (Galbraith, p. 60)
While this is often used as yet another cautionary tale of “government money printing” by libertarians and goldbugs, a couple of things need to be noted. The first, and most obvious, is that without government money printing there would be no United States. That seems like an important point to me.
The second is a take from Ben Franklin himself. He argued that inflation is really just a sort of tax by another name. And, as opposed to “conventional” government taxation, the inflationary tax falls more broadly across the population, making it arguably a more even-handed and fair method of taxation!
And you can kind of see his point. With legislative taxes, the government always has to decide who and what to tax, and how much. This means it inevitably picks winners and losers. Sometimes this can be done wisely, but in practice it often is not. An inflationary tax, by contrast, cannot easily be engineered by legislation to favor privileged insiders, unlike more conventional direct taxation, where the rich and well-connected are often spared much of the burden thanks to undue influence over legislators:
From 1776 to 1785 Franklin serves as the U.S. representative to the French court. He has the occasion to write on one important monetary topic in this period, namely, the massive depreciation of Congress’ paper money — the Continental dollar — during the revolution. In a letter to Joseph Quincy in 1783, Franklin claims that he predicted this outcome and had proposed a better paper money plan, but that Congress had rejected it.
In addition, around 1781 Franklin writes a tract called “Of the Paper Money of America.” In it he argues that the depreciation of the Continental dollar operated as an inflation tax or a tax on money itself. As such, this tax fell more equally across the citizenry than most other taxes. In effect, every man paid his share of the tax according to how long he retained a Continental dollar between the time he received it in payment and when he spent it again, the intervening depreciation of the money (inflation in prices) being the tax paid.
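Franklin’s point lends itself to simple arithmetic. Here is a minimal sketch (toy numbers of my own; the tract itself contains no such calculation that I know of) of how the inflation tax falls on each holder in proportion to how long they sit on the depreciating notes:

```python
# Toy model of Franklin's "inflation tax": the tax each person pays
# equals the purchasing power a note loses while they hold it.

def inflation_tax(face_value, monthly_depreciation, months_held):
    """Purchasing power lost between receiving a note and spending it."""
    remaining = face_value * (1 - monthly_depreciation) ** months_held
    return face_value - remaining

# Spend 100 Continental dollars immediately and you pay nothing;
# hold them six months at 10% depreciation per month and you pay ~47.
print(inflation_tax(100, 0.10, 0))   # 0.0
print(inflation_tax(100, 0.10, 6))   # ~46.86
```

The burden lands on whoever happens to be holding the notes as they depreciate, wealthy and well-connected included, which is the sense in which Franklin could call it even-handed.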
I’m not sure that many people would agree with that sentiment today, but it is an interesting take on the matter.
Once the war was won, and with the Continental notes inflated to zero, the fledgling government could now issue money for real. The first public building constructed by the new government was the mint, and Congress was finally granted the power to tax.
Although the war ended in 1783, the finances of the United States remained somewhat chaotic through the 1780s. In 1781, successful merchant Robert Morris was appointed superintendent of finance and personally issued “Morris notes”—commonly called Short and Long Bobs based on their tenure or time to maturity—and thus began the long process to reestablish the government’s credit.
In 1785, the dollar became the official monetary unit of the United States, the first American mint was established in Philadelphia in 1786, and the Continental Congress was finally given the power of taxation to pay off the debt in 1787, thus bringing together a more united fiscal, currency, and monetary policy.
“Taler” became a common name for currency because so many German states and municipalities picked it up. During the sixteenth century, approximately 1,500 different types of taler were issued in the German-speaking countries, and numismatic scholars have estimated that between the minting of the first talers in Jáchymov and the year 1900, about 10,000 different talers were issued for daily use and to commemorate special occasions.
The most famous and widely circulated of all talers became known as the Maria Theresa taler, struck in honor of the Austrian empress at the Gunzberg mint in 1773…The coin…became so popular, particularly in North Africa and the Middle East that, even after she died, the government continued to mint it with the date 1780, the year of her death.
The coin not only survived its namesake but outlived the empire that had created it. In 1805 when Napoleon abolished the Holy Roman Empire, the mine at Gunzberg closed, but the mint in Vienna continued to produce the coins exactly as they had been with the same date, 1780, and even with the mintmark of the closed mint. The Austro-Hungarian government continued to mint the taler throughout the nineteenth century until that empire collapsed at the end of World War I.
Other countries began copying the design of the Maria Theresa taler shortly after it went into circulation. They minted coins of a similar size and put on them a bust of a middle-aged woman who resembled Maria Theresa. If they did not have a queen of their own who fit the description, they used an allegorical female such as the bust of Liberty that appeared on many U.S. coins of the nineteenth century.
The name dollar penetrated the English language via Scotland. Between 1567 and 1571, King James VI issued a thirty-shilling piece that the Scots called the sword dollar because of the design on the back of it. A two-mark coin followed in 1578 and was called the thistle dollar.
The Scots used the name dollar to distinguish their currency, and thereby their country and themselves, more clearly from their domineering English neighbors to the south. Thus, from very early usage, the word dollar carried with it a certain anti-English or antiauthoritarian bias that many Scottish settlers took with them to their new homes in the Americas and other British colonies. The emigration of Scots accounts for much of the subsequent popularity of the word dollar in British colonies around the world… (Weatherford, History of Money, pp. 115-116)
In 1782, Thomas Jefferson wrote in his Notes on a Money Unit of the U.S. that “The unit or dollar is a known coin and the most familiar of all to the minds of the people. It is already adopted from south to north.”
The American colonists became so accustomed to using the dollar as their primary monetary unit that, after independence, they adopted it as their official currency. On July 6, 1785, the Congress declared that “the money unit of the United States of America be one dollar.” Not until April 2, 1792, however, did Congress pass a law to create an American mint, and only in 1794 did the United States begin minting silver dollars. The mint building, which was started soon after passage of the law and well before the Capitol or White House, became the first public building constructed by the new government of the United States. (Weatherford, History of Money, p. 118)
In the nineteenth century, there were strong arguments over the establishment of a central bank in the United States. One was, in fact, chartered, and its charter was later allowed to lapse. We’ll talk a little about this in the final entry of this series, but for now it is beyond the scope of this post.
In the late eighteenth century, France’s financial circumstances were still very dire. It constantly needed to raise money for its perennial wars with England, which, as we saw earlier, successfully funded its own wars with paper money and state borrowing via the Bank of England, an option not available to France in the wake of the Mississippi Bubble’s collapse and the failure of John Law’s Banque Royale. France’s generous loans to the United States’ revolutionaries may have been well appreciated by us Americans, but in retrospect they were probably not the best move given France’s fiscal situation (plus the fact that revolution would soon engulf France itself, something the French aristocracy obviously had no way of knowing at the time).
In the aftermath of the Revolution, the National Assembly repudiated the King’s debts. It also suspended taxation. But it still badly needed money, especially since many of the countries surrounding France (e.g. Austria, Prussia, Great Britain, Spain and several other monarchies) declared war on it soon after the King met the guillotine. The answer they came up with was, once again, monetizing land. In this case, it was the land seized from the Catholic Church by the Revolutionary government. “[T]he National Assembly agreed that newly nationalised properties in the form of old church land could be purchased through the use of high-denomination assignats, akin to interest-bearing government bonds, mortgaged (assignée) on the property.”
The Estates-General had been summoned in consequence of the terrible fiscal straits of the realm. No more could be borrowed. There was no central bank which could be commanded to take up loans. All still depended on the existence of willing lenders or those who could be apprehended and impressed with their duty.
The Third Estate could scarcely be expected to vote new or heavier levies when its members were principally concerned with the regressive harshness of those then being collected. In fact, on 17 June 1789 the National Assembly declared all taxes illegal, a breathtaking step softened by the provision that they might be collected on a temporary basis.
Meanwhile memories of John Law kept Frenchmen acutely suspicious of ordinary paper money; during 1788, a proposal for an interest-bearing note issue provoked so much opposition that it had to be withdrawn. But a note issue that could be redeemed in actual land was something different. The clerical lands were an endowment by Heaven of the Revolution.
The decisive step was taken on 19 December 1789. An issue of 400 million livres was authorized; it would, it was promised, ‘pay off the public debt, animate agriculture and industry and have the lands better administered’. These notes, the assignats, were to be redeemed within five years from the sale of an equivalent value of the lands of the Church and the Crown.
The first assignats bore interest at 5 per cent; anyone with an appropriate amount could use them directly in exchange for land. In the following summer when a new large issue was authorized, the interest was eliminated. Later still, small denominations were issued.
There were misgivings. The memory of Law continued to be invoked. An anonymous American intervened with Advice on the Assignats by a Citizen of the United States. He warned the Assembly against the assignats out of the rich recent experience of his own country with the Continental notes. However, the initial response to the land-based currency was generally favourable.
Had it been possible to stop with the original issue or with that of 1790, the assignats would be celebrated as a remarkably interesting innovation. Here was not a gold, silver or tobacco standard but one based solidly and logically on the good soil of France.
Purchasing power in the first years had stood up well. There was admiring talk of how the assignats had put land into circulation. And business had improved, employment had increased and sales of the Church and other public lands had been facilitated. On occasion, sales had been too good. In relation to annual income, the prices set were comparatively modest; speculators clutching large packages of the assignats had arrived to take advantage of the bargains.
However, in France, as earlier in America, the demands of revolution were insistent. Although the land was limited, the claims upon it could be increased.
The large issue of 1790 was followed by others – especially after war broke out in 1792. Prices denominated in assignats now rose; their rate of exchange for gold and silver, dealing in which had been authorized by the Assembly, declined sharply. In 1793 and 1794, under the Convention and the management of Cambon, there was a period of stability. Prices were fixed with some success. What could have been more important, the supply of assignats was curtailed by the righteous device of repudiating those that had been issued under the king. In those years they retained a value of around 50 per cent of their face amount when exchanged for gold and silver.
Soon, however, need again asserted itself. More and more were printed. In an innovative step in economic warfare, Pitt, after 1793, allowed the royalist emigres to manufacture assignats for export to France. This, it was hoped, would hasten the decay.
In the end, the French presses were printing one day to supply the needs of the next. Soon the Directory halted the exchange of good real estate for the now nearly worthless paper – France went off the land standard. Creditors were also protected from having their debts paid in assignats. This saved them from the ignominy of having (as earlier in America) to hide out from their debtors. (Galbraith, pp. 64-66)
The lands of aristocrats who had fled France were confiscated as well and used to back further issuances of paper currency. Despite this, as with the Continentals, the value of the assignats soon inflated away to very little. France then issued a new paper money, the mandats territoriaux, also carrying an entitlement to land, in an attempt to stabilize the currency. But distrust of the paper currency (and of the government) was so endemic that the mandats began to depreciate even before they were issued:
With the sale of the confiscated property, a great debtor class emerged, which was interested in further depreciation to make it cheaper to pay back debts. Faith in the new currency faded by mid-year 1792. Wealth was hidden abroad and specie flowed to surrounding countries with the British Royal Mint heavily purchasing gold, particularly in 1793 and 1794.
But deficits persisted and the French government still needed to raise money, so in 1792, it seized the land of emigrants and those who had fled France, adding another 2 billion livres or more to French assets. War with Belgium that year was largely self-funded as France extracted some rents, but not so for the war with England in 1793. Assignats no longer circulated as a medium of payment, but were an object of speculation. Specie was scarce, but sufficient, and farmers refused to accept assignats, which were practically demonetized. In February 1793, citizens of Paris looted shops for bread they could no longer afford, if they could find it at all.
In order to maintain its circulation, France turned to stiff penalties and the Reign of Terror extended into monetary affairs. During the course of 1793, the Assembly prohibited buying gold or silver at a premium, imposed a forced loan on a portion of the population, made it an offense to sell coin or differentiate the price between assignats and coin, and under the Law of the Maximum fixed prices on some commodities and mandated that produce be sold, with the death penalty imposed for infractions.
France realized that to restore order, the volume of paper money in circulation must decrease. In December 1794, it repealed the Law of the Maximum. In January 1795, the government permitted the export of specie in exchange for imports of staple goods. Prices fluctuated wildly and the resulting hyperinflation became a windfall for those who purchased national land with little money down. Inflation peaked in October 1795. In February 1796, in front of a large crowd, the assignat printing plates were destroyed.
By 1796, assignats gave way to specie and by February 1796, the experiment ended. The French tried to replace the assignat with the mandat, which was backed by gold, but so deep was the mistrust of paper money that the mandat began to depreciate before it was even issued and lasted only until February 1797…
…In February 1797 (16 Pluviôse year V), the Directory returned to gold and silver. But by then the Revolution was an accomplished fact. It had been financed, and this the assignats had accomplished. They have at least as good a claim on memory as the guillotine. (Galbraith, p. 66)
France’s monetary system eventually stabilized once its political situation did, though entire books have been written about that subject. The military dictatorship of Napoleon Bonaparte sold off France’s lands in North America to the United States to raise money for its wars of conquest on the European continent. Napoleon also finally established a central bank in France, based on the British model.
In 1800, the lingering suspicion of the French of such institutions had yielded to the financial needs of Napoleon. There had emerged the Banque de France which, in the ensuing century, developed in rough parallel with the Bank of England. In 1875, the former Bank of Prussia became the Reichsbank. Other countries had acquired similar institutions or soon did…(Galbraith, p. 41)
It might be going too far to say that without paper money neither the American nor the French revolution would ever have happened. But nor is it entirely absurd to say that this may well be the case. It’s certainly doubtful that they would have succeeded without it. It’s difficult to imagine how different history would be today had it not been for paper money and its role in revolution.
Paper money would continue to play that role throughout the Age of Revolutions well into the Twentieth Century, as Galbraith notes:
Paper was similarly to serve the Soviets in and after the Russian Revolution. By 1920, around 85 per cent of the state budget was being met by the manufacture of paper money…
In the aftermath of the Revolution the Soviet Union, like the other Communist states, became a stern defender of stable prices and hard money. But the Russians, no less than the Americans or the French, owe their revolution to paper.
Not that the use of paper money is a guarantee of revolutionary success. In 1913, in the old Spanish town of Chihuahua City, Pancho Villa was carrying out his engaging combination of banditry and social reform. Soldiers were cleaning the streets, land was being given to the peons, children were being put in schools and Villa was printing paper money by the square yard.
This money could not be exchanged for any better asset. It promised nothing. It was sustained by no residue of prestige or esteem. It was abundant. Its only claim to worth was Pancho Villa’s signature. He gave this money to whosoever seemed to be in need or anyone else who struck his fancy. It did not bring him success, although he did, without question, enjoy a measure of popularity while it lasted. But the United States army pursued him; more orderly men intervened to persuade him to retire to a hacienda in Durango. There, a decade later, when he was suspected by some to be contemplating another foray into banditry, social reform, and monetary policy, he was assassinated. (Galbraith, pp. 66-67)
Given that the Continentals and the assignats both suffered from hyperinflation towards the end, they have often been held up as a cautionary tale: governments are inherently profligate and cannot be trusted with money creation; only by strictly pegging paper money issuance to a cache of gold stashed away in a vault somewhere can hyperinflation be avoided.
As Galbraith notes, this reading is highly selective. Sure, if you look only for instances of paper money overissuance and inflation, you will find them. But doing so deliberately ignores the instances, often lasting for decades if not centuries, in which paper money functioned exactly as intended all across the globe, from ancient China, to colonial America, to modern times. It emphasizes the inflationary scare stories but ignores the very real stimulus to commercial activity that paper money has provided, as opposed to the extreme constraints of a precious-metal standard. It also ignores the extenuating circumstances behind many hyperinflations, such as Germany’s crushing war reparations in the twentieth century, or persistent economic warfare in the case of Venezuela today.
So the attitude that “government simply can’t be trusted” is more of a political opinion than something based on historical facts.
…in the minds of some conservatives…there must have been a lingering sense of the singular service that paper money had, in the recent past, rendered to revolution. Not only was the American Revolution so financed. So also was the socially far more therapeutic eruption in France. If the French citizens had been required to act within the canons of conventional finance, they could not, any more than the Americans, have acted at all. (Galbraith, pp. 61-62)
The desire for a gold standard comes from a desire to anchor the value of money in something outside the control of governments. But, of course, pegging the value of currency to some arbitrary amount of gold is itself a political choice. Nor does it guarantee price stability, since the value of gold itself fluctuates; a gold standard is more a guarantee of the stability of the price of gold than of the stability of the value of money. And in almost every case of war or economic depression in modern history, the gold standard has been immediately chucked into the trash bin.
The other thing worth noting is that the worth of paper money is related to both issues of supply AND demand. Often, it’s not just that there is too much supply of currency. It’s that people refuse to accept the currency, leading just as assuredly to a loss in value.
And the lack of acceptance is usually driven by a lack of faith in the issuing government. You can see why this might be the case for the assignats and the Continentals: both were issued by revolutionary governments whose very stability and legitimacy were in question, particularly in France. If the government issuing the currency (a stack of IOUs, remember) may not be around a year from now, how willing are you to accept that currency? James Madison pointed out that the value of any currency is mostly determined by faith in the credit of the government issuing it. That’s why he and other Founding Fathers worked so hard to reestablish the credit of the United States after the Continental note debacle.
As Rebecca Spang, the author of “Stuff and Money in the Time of the French Revolution,” notes, many people in revolutionary France were vigorously opposed to the seizure of Church property, and so they refused to accept the validity of notes backed by its value. This refusal contributed just as much to the hyperinflation as did any profligacy on the part of the government:
Revolutionary France became a paradigm case for the quantity theory of money, the view that prices are directly and proportionately correlated with the amount of money in circulation, and for the deleterious consequences of letting the latter run out of control.
Yet Spang shows that such neat economic interpretations are inadequate. At times, for example, prices rose first and politicians boosted the money supply in response.
Spang reiterates that the first assignats were neither a revolutionary policy nor a form of paper money. But as her stylishly crafted narrative makes clear, this soon changed. Politicians made the cardinal error of thinking that the state could be stabilised by in effect destabilising its money.
Popular distrust of the “real” worth of assignats prompted a contagion of fraud, suspicion and uncertainty. How could one tell a fake assignat, when technology couldn’t replicate them precisely? How could they even be used, when there was no compulsion beyond patriotic duty for sellers to accept them as payment? Small wonder that so many artists made trompe l’oeil images out of them — what looked solid and real was anything but…
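For reference, the quantity theory that Spang complicates is usually summarized by the textbook equation of exchange (my gloss, not anything from her book):

MV = PQ

where M is the money supply, V the velocity at which money circulates, P the price level, and Q the volume of real transactions. Treat V and Q as roughly constant and prices move in proportion to the money supply. Spang’s evidence is that in revolutionary France those background assumptions failed: at times prices rose first, and the money supply was expanded in response.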
Note that the situation of a stable government is totally different. Britain’s government was eminently stable compared to those of the United States and France at the time; hence its money retained most of its value, even when convertibility was temporarily suspended. The same logic underlies the value of Switzerland’s currency today, since it has a legendarily stable, neutral government (and really not that much in the way of actual resources).
So those who argue that America’s “fiat” money is no good would have to make the case that the United States government is somehow more illegitimate or more unstable than the governments of other wealthy, industrialized nations. To my mind, this is tantamount to treason. Yet no one ever calls them out on this point. From that standpoint, the biggest threat to the money supply comes not from overissuance (hyperinflation is nowhere to be seen) but from undermining the faith in, and credit of, the United States government. That’s been done exclusively by Republicans in recent years by grandstanding over the debt ceiling, an artificial borrowing constraint imposed during the United States’ entry into World War One. Really, this should be considered an unpatriotic and treasonous act. It almost certainly would have been perceived as such by the Founding Fathers.
I always have the same response to libertarians who sneer at the “worthlessness” of government fiat money: if you truly believe it is worthless, then I will gladly take it off your hands. Please hand over all the paper money in your wallet right now, as well as any paper money you have lying around your house. If you like, you can even withdraw some “worthless” paper money from the nearest ATM and hand that over as well. You can give me as much as you like.
To date, I have yet to have a libertarian take me up on that offer. I wonder why?
Next: The Civil War finally establishes a national paper currency for the U.S.
France ended up conducting its own monetary experiment with paper money at around the same time as the American colonies, in the early 1700s. Unlike the American experiment, it was not successful. It was initiated by an immigrant Scotsman, a gambling addict fleeing a murder charge, by the name of John Law (Jean Lass in French).
At this time, France was having much the same conversation about the money supply as the Anglo-Saxon world. In France, though, the problem was not so much a shortage of coins as an excess of sovereign debt, run up by the wild spending of France’s rulers on foreign wars and luxurious lifestyles.
Despite being probably the wealthiest and most powerful nation in Western Europe, France had debts (really, the King’s debts) that exceeded its assets by quite a bit, at least on paper. The country struggled to raise enough funds through its antiquated and inefficient feudal tax system even to pay the interest on its bonds; French debt traded in secondary markets as what we might today call junk bonds (i.e., with low odds of repayment).
Louis XIV, having lived too long, had died the year before Law’s arrival. The financial condition of the kingdom was appalling: expenditures were twice receipts, the treasury was chronically empty, the farmers-general of the taxes and their horde of subordinate maltôtiers were competent principally in the service of their own rapacity.
The Duc de Saint-Simon, though not always the most reliable counsel, had recently suggested that the straightforward solution was to declare national bankruptcy – repudiate all debt and start again. Philippe, Duc d’Orleans, the Regent for the seven-year-old Louis XV, was largely incapable of thought or action.
Then came Law. Some years earlier, it is said, he had met Philippe in a gambling den. The latter ‘had been impressed with the Scotsman’s financial genius.’ Under a royal edict of 2 May 1716, Law, with his brother, was given the right to establish a bank with capital of 6 million livres, about 250,000 English pounds…
(Galbraith, pp. 21-22)
…The creation of the bank proceeded in clear imitation of the already successful Bank of England. Under special license from the French monarch, it was to be a private bank that would help raise and manage money for the public debt. In keeping with his theories on the benefits of paper money, Law immediately began issuing paper notes representing the supposedly guaranteed holdings of the bank in gold coins.
Law’s bank took in gold and silver from the public and lent it back out in the form of paper money. The bank also took deposits in the form of government debt, cleverly allowing people to claim the full value of debts that were trading at heavy discounts: if you had a piece of paper saying the king owed you a thousand livres, you could get only, say, four hundred livres in the open market for it, but Law’s bank would credit you with the full thousand livres in paper money. This meant that the bank’s paper assets far outstripped the actual gold it had in store, making it a precursor of the “fractional-reserve banking” that’s normal today. Law’s bank had, by one estimate, about four times as much paper money in circulation as its gold and silver reserves…
The new paper money had an attractive feature: it was guaranteed to trade for a specific weight of silver, and, unlike coins, could not be melted down or devalued. Before long, the banknotes were trading at more than their value in silver, and Law was made Controller General of Finances, in charge of the entire French economy.
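To make the balance-sheet arithmetic concrete, here is a minimal sketch in Python. The figures are invented, but they are consistent with the description above: royal debt trading at roughly 40 cents on the livre is credited at full face value, leaving about four times as much paper as hard reserves.

```python
# A toy balance sheet for Law's bank. All numbers are hypothetical,
# chosen to match the ~40% market price of royal debt and the roughly
# 4x paper-to-reserve ratio estimated above.

gold_deposits = 1_000_000      # livres of coin actually in the vault
debt_face_value = 3_000_000    # face value of royal debt deposited
debt_market_value = 0.40 * debt_face_value  # what that debt fetched on the open market

# The bank credits depositors the FULL face value of the discounted debt,
# so paper liabilities are coin plus face-value debt:
notes_outstanding = gold_deposits + debt_face_value

print(f"Market value of the debt taken in: {debt_market_value:,.0f} livres")
print(f"Notes outstanding:                 {notes_outstanding:,.0f} livres")
print(f"Hard reserves:                     {gold_deposits:,.0f} livres")
print(f"Paper-to-reserve ratio:            {notes_outstanding / gold_deposits:.1f}x")
```

The depositor’s windfall (a full thousand livres of paper for debt worth four hundred) is what made the offer so popular, and also exactly what stretched the reserve ratio.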
It’s also worth noting that banknotes were denominated in the unit of account, unlike coins, which typically were not. Coins’ value usually fluctuated against the unit of account (what prices were expressed in), sometimes by the day. What a silver sovereign or gold Louis d’Or was worth on one day might be different the next, especially since the monarchs liked to devalue the coinage in order to decrease the amount of their debts. However, if you brought, say, 10 livres, 18 sous worth of coins to Law’s bank, the banknote would be written up for the amount the coins were worth at that time: 10 livres, 18 sous.
By buying back the government’s debt, Law was able to “retire” it. Thus, the money circulating was ultimately backed by government debt (bonds), just like our money today. Law’s promise to redeem the notes for specie gave users the confidence to use them. Later on, the government would decree the notes of the Banque Generale to be the “official” money for payment of taxes and settlement of all debts, legitimizing their value by fiat. Law later attempted to sever the link to gold and silver by demonetizing specie altogether. He was not successful; paper money was far too novel at the time for people to trust its value in the absence of anything tangible backing it.
Not much of what transpired would be unusual today, but it was pretty radical for the early 1700s. Had Law stopped at this point, it’s likely that all of this would have been successful, as Galbraith points out:
In these first months, there can be no doubt, John Law had done a useful thing. The financial position of the government was eased. The bank notes loaned to the government and paid out by it for its needs, as well as those loaned to private entrepreneurs, raised prices….[and] the rising prices…brought a substantial business revival.
Law opened branches of his bank in Lyons, La Rochelle, Tours, Amiens and Orleans; presently, in the approximate modern language, he went public. His bank became a publicly chartered company, the Banque Royale.
Had Law stopped at this point, he would be remembered for a modest contribution to the history of banking. The capital in hard cash subscribed by the stockholders would have sufficed to satisfy any holders of notes who sought to have them redeemed. Redemption being assured, not many would have sought it.
It is possible that no man, having made such a promising start, could have stopped…
(Galbraith, pp. 22-23)
Trading government debt for paper money helped lower the government’s debts, but on paper, France’s liabilities still exceeded its assets. But it had one asset that had not yet been monetized—millions of acres of land on the North American continent. So Law set out to monetize that land by turning it into shares in a joint-stock company called the Mississippi Company (Compagnie d’Occident). The Mississippi Company had a monopoly on all trading with the Americas. Buying a share in the company meant a cut of the profits (i.e. equity) of trading with North America.
The first loans and the resulting note issue having been visibly beneficial – and also a source of much personal relief – the Regent proposed an additional issue. If something does good, more must do better. Law acquiesced.
Sensing the need, he also devised a way of replenishing the reserves with which the Banque Royale backed up its growing volume of notes. Here he showed that he had not forgotten his original idea of a land bank.
His idea was to create the Mississippi Company to exploit and bring to France the very large gold deposits which Louisiana was thought to have as subsoil. To the metal so obtained were also to be added the gains of trade. Early in 1719, the Mississippi Company (Compagnie d’Occident), later the Company of the Indies, was given exclusive trading privileges in India, China and the South Seas. Soon thereafter, as further sources of revenue, it received the tobacco monopoly, the right to coin money and the tax farm. (Galbraith, p. 23)
Law—or the Duc d’Arkansas as he was now known—talked up the corporation so well that the value of the shares skyrocketed—probably the world’s very first stock bubble (but hardly the last). Gambling fever was widespread and contagious, as the desire to get rich by doing nothing is a human universal. The term “millionaire” was coined. Law took advantage of the inflated share price to buy back more of the government’s debt. And the money to buy the shares at the inflated prices was printed by the bank itself. Knowing that there was far more paper than gold and silver to back it in the kingdom, Law then tried to break the link between paper money and specie by demonetizing gold and silver; at one point making it illegal to even hold precious metals.
He was unsuccessful. Paper money was still too new, and people were unwilling to trust it without the backing of precious metal, causing a loss of faith in the currency. Later suspensions of convertibility elsewhere came only after generations of paper money use; Law’s entire scheme, from origin to collapse, played out within just a few years.
[Law] funded the [Mississippi] company the same way he had funded the bank, with deposits from the public swapped for shares. He then used the value of those shares, which rocketed from five hundred livres to ten thousand livres, to buy up the debts of the French King. The French economy, based on all those rents and annuities and wages, was swept away and replaced by what Law called his “new System of Finance.”
The use of gold and silver was banned. Paper money was now “fiat” currency, underpinned by the authority of the bank and nothing else. At its peak, the company was priced at twice the entire productive capacity of France…that is the highest valuation any company has ever achieved anywhere in the world.
Galbraith and Weatherford summarize the shell game that Law’s “system” ended up becoming:
To simplify slightly, Law was lending notes of the Banque Royale to the government (or to private borrowers) which then passed them on to people in payment of government debts or expenses. These notes were then used by the recipients to buy stock in the Mississippi Company, the proceeds from which went to the government to pay expenses and pay off creditors who then used the notes to buy more stock, the proceeds from which were used to meet more government expenditures and pay off more public creditors. And so it continued, each cycle being larger than the one before. (Galbraith, p. 24)
The Banque Royale printed paper money, which investors could borrow in order to buy stock in the Mississippi Company; the company then used the new notes to pay out its bogus profits. Together the Mississippi Company and the Banque Royale were producing paper profits on each other’s accounts. The bank had soon issued twice as much paper money as there was specie in the whole country; obviously it could no longer guarantee that each paper note would be redeemed in gold. (Weatherford, p. 131)
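A toy simulation of that circuit might look like the following sketch, with every figure and the growth factor invented for illustration: notes are printed and lent to the state, spent to creditors, recycled into Company shares, and each round is larger than the last.

```python
# Toy model of the Banque Royale / Mississippi Company circuit described
# by Galbraith. The initial issue and the growth factor are assumptions.

issue = 10_000_000        # livres of notes printed in the first round
growth = 1.5              # each cycle is assumed 50% larger than the last
notes_outstanding = 0

for cycle in range(1, 6):
    notes_outstanding += issue        # bank prints notes, lends to the state
    # the state pays creditors, who buy Company stock; the share proceeds
    # return to the state, prompting a still larger issue next round
    issue = int(issue * growth)
    print(f"Cycle {cycle}: notes outstanding = {notes_outstanding:>12,} livres")
```

The point of the sketch is that nothing in the loop ever retires paper: every pass through the circuit adds to the note issue, which is why the scheme had to keep growing or collapse.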
Such a scheme couldn’t last, of course. Essentially the entire French economy—its central bank, its money supply, its tax system, and the monopoly on land in North America—was in the hands of one single, giant conglomerate run by one man. That meant that when one part of the system failed, the rest went down with it, like roped-together mountain climbers.
Because the central bank owned the Mississippi company, it had an incentive to loan out excess money to drive the share price up—in other words, to inflate a stock bubble based on credit. This is always a bad idea. Finally, Law’s exaggeration of the returns on investments in the Mississippi Company inflated expectations far beyond what was realistic.
The popping of the Mississippi stock bubble, followed by a run on the bank, was enough to bring the whole thing crashing down.
People started to wonder whether these suddenly lucrative investments were worth what they were supposed to be worth; then they started to worry, then to panic, then to demand their money back, then to riot when they couldn’t get it.
Gold and silver were reinstated as money, the company was dissolved, and Law was fired, after a hundred and forty-five days in office. In 1720, he fled the country, ruined. He moved from Brussels to Copenhagen to Venice to London and back to Venice, where he died, broke, in 1729.
As Law must have known, if you gamble big, sometimes you lose big.
Some of the death of the Bank was murder, not suicide. As part of his System, one of Law’s initiatives was to simplify and modernize the inefficient and antiquated French tax system. Taxes were collected by tax farmers (much as in ancient Rome), and Law threatened to upset their apple cart. He also attempted to end the sale of government offices to the highest bidder. This made him a lot of enemies among the moneyed classes, who thrived on graft and corruption. Such influential people (notably the financiers the Paris brothers) were instrumental in the run on the bank and the subsequent loss of confidence in the money system:
[Law] set about streamlining a tax system riddled with corruption and unnecessary complexity. As one English visitor to France in the late seventeenth century observed, “The people being generally so oppressed with taxes, which increase every day, their estates are worth very little more than what they pay to the King; so that they are, as it were, tenants to the Crown, and at such a rack rent that they find great difficulty to get their own bread.” The mass of offices sold to raise money had caused one of Louis XIV’s ministers to comment, “When it pleases Your Majesty to create an office, God creates a fool to purchase it.” There were officials for inspecting the measuring of cloth and candles; hay trussers; examiners of meat, fish and fowl. There was even an inspector of pigs’ tongues.
This did nothing for efficiency, Law deemed, and served only to make necessities more expensive and to encourage the holders of the offices “to live in idleness and deprive the state of the service they might have done it in some useful profession, had they been obliged to work.” In place of the hundreds of old levies he swept away (over forty in one edict alone), Law introduced a new national taxation system called the denier royal, based on income. The move caused an outcry among the holders of offices, many of whom were wealthy financiers and members of the Parliament, but delight among the public. “The people went dancing and jumping about the streets,” wrote Defoe. “They now pay not one farthing tax for wood, coal, hay, oats, oil, wine, beer, bread, cards, soap, cattle, fish.” (Janet Gleeson, Millionaire, pp. 155-156)
[Law] wanted to introduce the logic of capitalism in France, based on providing credit through money creation. Money creation had to be based on expected future wealth, and no longer on the past wealth accumulated in precious metals. (Aglietta, p. 206, emphasis in original)
The danger is that if this wealth fails to materialize, or if people lose the belief that it will, confidence in the system is lost and failure soon follows.
Although John Law has come down in history as a grifter, and his ideas as fundamentally unsound, many of his ideas eventually became fundamental tenets of modern global finance:
The great irony of Law’s life is that his ideas were, from the modern perspective, largely correct. The ships that went abroad on behalf of his great company began to turn a profit. The auditor who went through the company’s books concluded that it was entirely solvent—which isn’t surprising, when you consider that the lands it owned in America now produce trillions of dollars in economic value.
Today, we live in a version of John Law’s system. Every state in the developed world has a central bank that issues paper money, manipulates the supply of credit in the interest of commerce, uses fractional-reserve banking, and features joint-stock companies that pay dividends. All of these were brought to France, pretty much simultaneously, by John Law.
Law’s efforts left a lingering suspicion of paper money in France. Unfortunately, the revenue problem was not definitively solved. Going back on a specie standard delivered a huge blow to commerce. While England’s paper money system flourished, France stagnated economically. Eventually, the government’s revenue situation became so dire that the King had no choice but to call the Estates General—the extremely rare parliamentary assembly whose convening in 1789 kicked off the French Revolution.
Once the Mississippi bubble burst, a lot of the capital in France needed some new outlet to invest in. Much of that capital fled across the channel to England, which at the time was inflating a stock bubble of its own:
France’s ruin was England’s gain. Numerous bruised Mississippi shareholders chose to reinvest in English South Sea shares.
The previous month, with a weather eye to developments in France, the South Sea Company managed to beat its rival the Bank of England and secure a second lucrative deal with the government whereby it took over a further £48 million of national debt and launched a new issue of shares. A multitude of English and foreign investors were now descending on London as they had flocked less than a year earlier to Paris “with as much as they can carry and subscribing for or buying shares.”
In Exchange Alley–London’s rue Quincampoix–the sudden surge of new money also bubbled a plethora of alternative companies launched to capitalize on the new fashion for financial fluttering… (Gleeson, p. 200)
Britain chose a different tack – sovereign debt would be monetized and circulate as money. It too utilized the joint-stock company model that had been invented in the previous centuries to enable the Europeans to raise the funds to exploit and colonize the rest of the world. A bank was founded as a chartered company to take in money through subscribed shares and loan out that money to the King. That debt—and not land—would securitize the notes issued by the bank. The notes would then circulate as money, albeit alongside precious metal coins and several other forms of payment. As with the original invention of sovereign debt in northern Italy, it was used to raise the necessary funds for war:
The modern system for dealing with [the] problem [of funding wars] arose in England during the reign of King William, the Protestant Dutch royal who had been imported to the throne of England in 1689, to replace the unacceptably Catholic King James II.
William was a competent ruler, but he had serious baggage—a long-running dispute with King Louis XIV of France. Before long, England and France were involved in a new phase of this dispute, which now seems part of a centuries-long conflict between the two countries, but at the time was variously called the Nine Years’ War or King William’s War. This war presented the usual problem: how could the nations afford it?
King William’s administration came up with a novel answer: borrow a huge sum of money, and use taxes to pay back the interest over time. In 1694, the English government borrowed 1.2 million pounds at a rate of eight per cent, paid for by taxes on ships’ cargoes, beer, and spirits. In return, the lenders were allowed to incorporate themselves as a new company, the Bank of England. The bank had the right to take in deposits of gold from the public and—a second big innovation—to print “Bank notes” as receipts for the deposits. These new deposits were then lent to the King. The banknotes, being guaranteed by the deposits, were as good as gold money, and rapidly became a generally accepted new currency.
From this point forward, money would be circulating government debt. Plus, its value would be based on future revenues, as Aglietta noted above, and not just on the amount of gold and silver coins floating around.
The originality of the Bank of England was that it was not a deposit bank. Unlike for the Bank of Amsterdam, the coverage for the notes issued was very low (3 percent in the beginning). These notes, the counterparty to its loans to the state, replaced bills of exchange and became national and international means of payment for the bank’s customers.
They were not legal tender until 1833. But the securities issued by the bank, bringing interest on the public debt, became legal tender for all payments to the government from 1697 onwards. (Aglietta, pp. 136-137)
Why did the King of England have to borrow at all? Well, for a couple of reasons. The power to raise taxes had been taken away from the King and given to Parliament as a consequence of the English Revolution. That revolutionary era also witnessed the inauguration of goldsmith banking (such as that undertaken by John Law’s own family of goldsmiths). These goldsmith receipts were the forerunners of the banknote:
The English Civil War…broke out because parliament disputed the king’s right to levy taxes without its consent. The use of goldsmith’s safes as secure places for people’s jewels, bullion and coins increased after the seizure of the mint by Charles I in 1640 and increased again with the outbreak of the Civil War. Consequently some goldsmiths became bankers and development of this aspect of their business continued after the Civil War was over.
Within a few years of the victory by the parliamentary forces, written instructions to goldsmiths to pay money to another customer had developed into the cheque (or check in American spelling). Goldsmiths’ receipts were used not only for withdrawing deposits but also as evidence of ability to pay and by about 1660 these had developed into the banknote.
By this time, control over money had passed into the hands of a rising mercantile class, who—thanks to the staggering wealth produced by globalized trade—possessed more wealth than mere princes and kings, but lacked the ability to write laws or to print money, powers which they strongly coveted. It was these merchants and “moneyed men” (often members of the Whig party in Parliament) who backed the Dutch stadtholder William of Orange’s claim to the English throne in 1688.
The banknotes began to circulate widely, displacing coins and bills of exchange. And it didn’t stop there: more money was quickly needed, and the Bank acquired more influence. Part of this was due to England being a naval—rather than an army—power. Warships require huge expenditures of capital to build. They also require a vast panoply of resources, such as wood, nails, iron, cloth, stocked provisions, and so forth; whereas land-based armies just require paying soldiers and provisions (which can be commandeered). Thus, the financial means to mobilize these resources were much more likely to develop in naval powers such as Holland and England than in continental powers like France, Austria and Spain.
This important post from the WEA Pedagogy blog uses excerpts from Ellen Brown’s Web of Debt to lay out the creation of the Bank of England, and, consequently, of central banking in general (it is well worth reading in full):
William was soon at war with Louis XIV of France. To finance his war, he borrowed 1.2 million pounds in gold from a group of moneylenders, whose names were to be kept secret. The money was raised by a novel device that is still used by governments today: the lenders would issue a permanent loan on which interest would be paid but the principal portion of the loan would not be repaid.
The loan also came with other strings attached. They included:
– The lenders were to be granted a charter to establish a Bank of England, which would issue banknotes that would circulate as the national paper currency.
– The Bank would create banknotes out of nothing, with only a fraction of them backed by coin. Banknotes created and lent to the government would be backed mainly by government I.O.U.s, which would serve as the “reserves” for creating additional loans to private parties.
– Interest of 8 percent would be paid by the government on its loans, marking the birth of the national debt.
– The lenders would be allowed to secure payment on the national debt by direct taxation of the people. Taxes were immediately imposed on a whole range of goods to pay the interest owed to the Bank.
The Bank of England has been called “the Mother of Central Banks.” It was chartered in 1694 to William Paterson, a Scotsman who had previously lived in Amsterdam. A circular distributed to attract subscribers to the Bank’s initial stock offering said, “The Bank hath benefit of interest on all moneys which it, the Bank, creates out of nothing.” The negotiation of additional loans caused England’s national debt to go from 1.2 million pounds in 1694 to 16 million pounds in 1698. By 1815, the debt was up to 885 million pounds, largely due to the compounding of interest. The lenders not only reaped huge profits, but the indebtedness gave them substantial political leverage.
The Bank’s charter gave the force of law to the “fractional reserve” banking scheme that put control of the country’s money in a privately owned company. The Bank of England had the legal right to create paper money out of nothing and lend it to the government at interest. It did this by trading its own paper notes for paper bonds representing the government’s promise to pay principal and interest back to the Bank — the same device used by the U.S. Federal Reserve and other central banks today.
Note that the interest on the loan is paid, but never the loan itself. That meant that tax revenues were increasingly funneled to a small creditor class to whom the government was indebted. Today, we call such people bondholders, and they exercise their leverage over governments through the bond markets. For all intents and purposes, this system ended government sovereignty over money and tied the hands of even elected governments, limiting their ability to spend tax money on the domestic needs of their own people. Control over the state’s money was lost forever.
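The arithmetic of such a perpetual loan is easy to sketch. Using the figures from the excerpt above (1.2 million pounds at 8 percent), the interest alone repays the lenders’ outlay in twelve and a half years, and everything after that is pure income while the principal is still owed:

```python
# Cash flows of the 1694 perpetual loan: interest is paid indefinitely,
# the principal is never redeemed. Figures are taken from the text above.

principal = 1_200_000   # pounds lent to the Crown
rate = 0.08             # 8 percent per year

annual_interest = principal * rate             # 96,000 pounds per year, forever
breakeven_years = principal / annual_interest  # 12.5 years to recoup the outlay

print(f"Annual interest: {annual_interest:,.0f} pounds")
print(f"Lenders recoup their principal after {breakeven_years:.1f} years")
for years in (25, 50, 100):
    total = annual_interest * years
    print(f"After {years:>3} years: {total:,.0f} pounds paid in interest")
```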
An interesting couple of notes: William Paterson was, like John Law, a Scotsman—giving credence to the claim that it was the Scots who “invented capitalism” (Adam Smith and James Watt were also Scots). It also raises the idea (to me, anyway) that the modern financial system was started by instinctive hustlers and gamblers. We’ve already referred to John Law’s expertise at the gambling tables of Europe and his ability to inspire confidence in his schemes. Paterson, upon returning to Scotland, began raising funds via stock for an ambitious scheme to found a colony in Central America: the Darien scheme. It ended up being one of the worst disasters in history. It collapsed so badly that Scotland’s entire financial health was devastated, which is considered to be a factor in Scotland signing the Acts of Union, politically joining with England to the south.
The WEA Pedagogy blog then adds some additional details:
Some more detail of interest is that the creation of Bank of England was tremendously beneficial for England. The King, no longer constrained, was able to build up his navy to counter the French. The massive (deficit) spending required for this purpose led to substantial progress in industrialization.
Quoting Wikipedia on this: “As a side effect, the huge industrial effort needed, including establishing ironworks to make more nails and advances in agriculture feeding the quadrupled strength of the navy, started to transform the economy. This helped the new Kingdom of Great Britain – England and Scotland were formally united in 1707 – to become powerful. The power of the navy made Britain the dominant world power in the late 18th and early 19th centuries”
The post then summarizes the history of the creation of central banking:
…It is in this spirit that we offer a “finance drives history” view of the creation of the first Central Bank. The history above can be encapsulated as follows:
1. Queen Elizabeth asserted and acquired the sovereign right to issue money.
2. The moneylenders (the mysterious 0.1% of that time) financed and funded a revolution against the king, acquiring many privileges in the process.
3. Then they financed and funded the restoration of the aristocracy, acquiring even more privileges in the process.
4. Finally, when the King was in desperate straits to raise money, they offered to lend him money at 8% interest, in return for creating the Bank of England, acquiring permanently the privilege of printing money on behalf of the king.
The process by which money was created by the Bank of England is extremely interesting. They acquired the debt of the King. This debt was used as collateral/backing for the money they created. The notes they issued were legal tender in England. Whenever necessary, they were prepared to exchange them for gold, at the prescribed rates. However, when the confidence of the public is high, the need for actual gold as backing is substantially reduced.
As I noted above, the importance of the Navy in the subsequent industrialization of England is often overlooked. A few scholars have argued that Britain’s emphasis on naval power was a factor in England (and not somewhere else) becoming the epicenter of the Industrial Revolution. Many of its key inventions were sponsored by the government in order to fight and navigate more effectively at sea (from accurate clocks and charts to canned food). Even early mass production was prompted by the needs of the British Navy: ships’ pulley blocks were among the first items to be mass-produced by machine.
Just like in other countries, the needs of war caused the Bank to issue more and more notes, greatly increasing the national debt. However, the vast profits of industrialization and colonialism were enough to support it. When convertibility eventually had to be suspended out of necessity (as it was from 1797 to 1821, during the Napoleonic Wars), paper money continued to carry the trust of the public, unlike in France. Galbraith sums up the subsequent history of the Bank of England:
In the fifteen years following the granting of the original charter the government continued in need, and more capital was subscribed by the Bank. In return, it was accorded a monopoly of joint-stock, i.e., corporate, banking under the Crown, one that lasted for nearly a century. In the beginning, the Bank saw itself merely as another, though privileged, banker.
Similarly engaged in a less privileged way were the goldsmiths, who by then had emerged as receivers of deposits and sources of loans and whose operations depended rather more on the strength of their strong boxes than on the rectitude of their transactions. They strongly opposed the renewal of the Bank’s charter. Their objections were overcome, and the charter was renewed.
Soon, however, a new rival appeared to challenge the Bank’s position as banker for the government. This was the South Sea Company. In 1720, after some years of more routine existence, it came forward with a proposal for taking over the government debt in return for various concessions, including, it was hoped, trading privileges to the Spanish colonies, which, though it was little noticed at the time, required a highly improbable treaty with Spain.
The Bank of England bid strenuously against the South Sea Company for the public debt but was completely outdone by the latter’s generosity, as well as by the facilitating bribery by the South Sea Company of Members of Parliament and the government. The rivalry between the two companies did not keep the Bank from being a generous source of loans for the South Sea venture. All in all, it was a narrow escape.
For the enthusiasm following the success of the South Sea Company was extreme. In the same year that Law’s operations were coming to their climax across the Channel, a wild speculation developed in South Sea stock, along with that in numerous other company promotions, including one for a wheel for perpetual motion, one for ‘repairing and rebuilding parsonage and vicarage houses’ and the immortal company ‘for carrying on an undertaking of great advantage, but nobody to know what it is’. All eventually passed into nothing or something very near. In consequence of its largely accidental escape, the reputation of the Bank for prudence was greatly enhanced.
As Frenchmen were left suspicious of banks, Englishmen were left suspicious of joint-stock companies. The Bubble Acts (named for the South Sea bubble) were enacted and for a century or more kept such enterprises under the closest interdict.
From 1720 to 1780, the Bank of England gradually emerged as the guardian of the money supply as well as of the financial concerns of the government of England. Bank of England notes were readily and promptly redeemed in hard coin and, in consequence, were not presented for redemption. The notes of its smaller competitors inspired no such confidence and were regularly cashed in or, on occasion, orphaned. By around 1770, the Bank of England had become nearly the sole source of paper money in London, although the note issues of country banks lasted well into the following century. The private banks became, instead, places of deposit. When they made loans, it was deposits, not note circulation, that expanded, and, as a convenient detail, cheques now came into use. (Galbraith, pp. 32-34)
By a complete accident, Britain was able to escape France’s fate. When the South Sea bubble popped, the Bank of England was able to reliably take up the slack and manage the government’s debt—an option that France did not have, since the central bank and the Company were all part of the same organization, and that organization had a monopoly over loans to the government, tax collection, and money creation.
Despite paper instruments like bills of exchange having existed for centuries, for most ordinary people, money was exclusively the gold and silver coins minted by various national governments. Gold was used for high-value transactions, and silver for smaller ones. When the precious metals from the New World began flowing into Europe, the amount of coins dramatically increased, leading to a continent-wide bout of inflation.
The Spanish, the major beneficiaries of this increased money supply from silver mines of Bolivia and Mexico, used the money to purchase all sorts of things from abroad and live large. Because they became so filthy rich with very little effort (the enslaved Native Americans did all the hard work of digging out the silver), the Spanish failed to develop any domestic industries or innovate much, and thus were passed over by the more industrious Northern Europeans—much like a wealthy, spoiled heir who never learns any practical skills until the money runs out—and by then it’s too late.
There were many in Europe after 1493 who knew only distantly of the discovery and conquest of lands beyond the ocean seas, or to whom this knowledge was not imparted at all. There were few, it can be safely said, who did not feel one of its principal consequences.
Discovery and conquest set in motion a vast flow of precious metal from America to Europe, and the result was a huge rise in prices – an inflation occasioned by an increase in the supply of the hardest of hard money.
Almost no one in Europe was so removed from market influences that he did not feel some consequence in his wage, in what he sold, in whatever trifling thing he had to buy.
The price increases occurred first in Spain, where the metal first arrived; then, as they were carried by trade (or perhaps in lesser measure by smuggling or conquest) to France, the Low Countries and England, inflation followed there.
In Andalusia, between 1500 and 1600, prices rose perhaps fivefold. In England, if prices during the last half of the fifteenth century, i.e. before Columbus, are taken as 100, by the last decade of the sixteenth century they were roughly at 250; eighty years later, by the decade of 1673 through 1682, they were around 350, up by three-and-a-half times from the level before Columbus, Cortez and the Pizarros. After 1680, they levelled off and subsided, as much earlier they had fallen in Spain. (Galbraith, pp. 8-9)
Prior to this era, Europe had dealt with ongoing, chronic shortages of precious metals for coins, because much of the continent’s silver leaked out through trading with the Arab world, especially after the Crusades. This is why much of the European economy remained unmonetized for so long. In fact, northern Italian bankers had invented banking and bills of exchange specifically to deal with this problem. Thus, markets in Europe remained confined to specific market towns and “ports of trade” and were subject to strict regulations by rulers. It was not a lack of desire for profits on the part of rulers, but a lack of coins that kept capitalism in embryo.
The vast increase in the money supply from New World silver and gold is what made capitalism possible in Western Europe, but that’s a story for another time.
At its peak in the early 17th century, 160,000 native Peruvians, slaves from Africa and Spanish settlers lived in Potosí to work the mines around the city: a population larger than London, Milan or Seville at the time. In the rush to exploit the silver, the first Spanish colonisers occupied the locals’ homes, forgoing the typical colonial urban grid and constructing makeshift accommodation that evolved into a chaotic mismatch of extravagant villas and modest huts, punctuated by gambling houses, theatres, workshops and churches.
High in the dusty red mountains, the city was surrounded by 22 dams powering 140 mills that ground the silver ore before it was moulded into bars and sent to the first Spanish colonial mint in the Americas. The wealth attracted artists, academics, priests, prostitutes and traders, enticed by the Altiplano’s icy mysticism. “I am rich Potosí, treasure of the world, king of all mountains and envy of kings” read the city’s coat of arms, and the pieces of eight that flowed from it helped make Spain the global superpower of the period.
This price spike led to an important realization once prices finally leveled off in the late 1600s: the number of economic transactions (and hence the overall size of the economy and the capacity to specialize) depended on the amount of money in circulation. In other words, the volume of trade is determined by the quantity of currency available to conduct it.
Today this is known as the quantity theory of money.
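Stated formally (in its much later textbook form, Fisher’s equation of exchange), the idea is that the money stock times how fast it circulates must equal the price level times the volume of transactions:

```latex
% The equation of exchange underlying the quantity theory of money:
%   M = quantity of money in circulation
%   V = velocity of money (how often each unit changes hands)
%   P = the price level
%   T = the volume of transactions
M V = P T
```

Hold V and T roughly steady, and a larger M must show up as a higher P, which is just what Europeans watched happen as New World silver poured in.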
This newfound abundance of silver in Europe caused rising prices–the so-called “Price Revolution”. For the first time there was enough money to create a new class of people whose wealth consisted primarily of money as opposed to land: moneyed men, or the merchant caste. It also caused Spanish coins to be widely used and distributed, functioning as the world’s first global currency from the Americas to the Middle East to Asia:
The silver of the Americas made possible a world economy for the first time, as much of it was traded not only to the Ottomans but to the Chinese and East Indians as well, bringing all of them under the influence of the new silver supplies and standardized silver values. Europe’s prosperity boomed, and its people wanted all the teas, silks, cottons, coffees, and spices which the rest of the world had to offer. Asia received much of this silver, but it too experienced the silver inflation that Europe underwent. In China, silver had one-fourth the value of gold in 1368, before the discovery of America, but by 1737 the ratio had plummeted to twenty to one, a decline of silver to one-fifth of its former value. This flood of American silver came to Asia directly from Acapulco across the Pacific via Manila in the Philippines, whence it was traded to China for spices and porcelain. (Weatherford, Indian Givers, pp. 16-17)
The so-called “Price Revolution” taught Europeans another important lesson: what constituted money didn’t change, but its purchasing power did. Therefore, they concluded, the value of money depended on how much of it there was in circulation, and not on some intrinsic quality. If there was a shortage of cash, it was worth a lot (i.e. it had high purchasing power). If there was a surplus, it wasn’t worth nearly as much (i.e. it had lower purchasing power). They had seen this first-hand.
In other words, the value of money had to do with how much of it there was, more than with any intrinsic, magical quality. The value attributed to gold and silver was merely a cultural artifact.
In fact, money had to be useless, since if it were more useful as a commodity than as money, then that’s what it would be used for, and there would be perennial shortages of currency causing the economy to contract.
This led to the following conclusion: if money had no inherent value, but was merely an expedient for spot transactions, then why not paper? It did have to be backed by something, though, otherwise people would lose confidence in it. Although precious metal coins could be devalued by government edicts, their worth could never fall to zero, since there was always a commodity market for gold and silver for things like jewelry and tea sets. Precious metals tended to flow from where they were undervalued to countries where the commodity price was higher, causing perennial spot shortages throughout Europe, along with the requisite economic chaos.
The basic problem people were struggling with was this: since all money at the time depended on precious metals, how could you increase the supply of money without stumbling upon new sources of metal, as the Spanish had done? The money in circulation had to be increased—that much was obvious to a growing number of people. But the low-hanging fruit of gold and silver had already been harvested. And with vast new material wealth continuing to flow into Europe from the Americas, how could the money supply be increased enough to take advantage of it?
Paper was an obvious solution. Paper had come to Europe in the Middle Ages from China. After the Black Death, many of the cotton clothes worn by the deceased were turned into pulp, which helped spread the use of paper, and indirectly drive the commercial revolution of the Middle Ages, along with innovations like Arabic numerals and double-entry bookkeeping (aka the “Venetian method”). The printing press, invented in Mainz by Gutenberg around 1440, further enhanced the power of paper. But the real use of paper was in banking:
In the West, paper found its most important use as a means of keeping ledgers in banks. Long before it was used as a means of printing more money, it was used by bankers to increase the money supply. Only later did it gradually emerge as a replacement for coins in daily commerce. The initial development and circulation of monetary bills of paper came about as a side effect of banking. (Weatherford, p. 128)
Paper instruments of credit, such as bills of exchange, were already widely circulating throughout Europe. Yet, underneath it all, money was still ultimately tied to finite amounts of precious metal. Paper checks were simply transfers of monies from one account to another, similar to giro banking in the ancient world, while bills of exchange were:
“…essentially a written order to pay a fixed sum of money at a future date. Bills of exchange were originally designed as short-term contracts but gradually became heavily used for long-term borrowing. They were typically rolled over and became de facto short-term loans to finance longer-term projects…bills of exchange could be re-sold, with each seller serving as a signatory to the bill and, by implication, insuring the buyer of the bill against default…”
One solution was just to issue credit in excess of the amount of gold and silver stored in your vaults—the so-called “goldsmith’s trick.” This became especially common around the time of the English Revolution, when goldsmiths acted as moneylenders and bankers. As long as there was enough gold and silver sitting in the vault to satisfy the people showing up to exchange their paper, you were all right. But if more paper was presented for redemption than you had gold and silver on hand, you were doomed. This is why governments were reluctant to embrace such a solution (later, this idea would underpin fractional reserve banking).
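A minimal sketch of the gamble involved, with invented numbers: the goldsmith is solvent only so long as each day’s redemptions stay below the coin on hand, and a single bad day ruins him.

```python
# The "goldsmith's trick" as a toy simulation. The reserve level, note
# issue, and daily redemption rates are all invented for illustration.
import random

reserves = 100        # gold coins actually in the vault
notes = 400           # paper receipts circulating against them (4x reserves)

random.seed(1)
for day in range(1, 8):
    rate = random.uniform(0.02, 0.40)   # fraction of notes presented today
    demanded = int(notes * rate)
    if demanded > reserves:
        print(f"Day {day}: {demanded} coins demanded, only {reserves} on hand. Ruined.")
        break
    reserves -= demanded                # redeemed notes drain the vault...
    notes -= demanded                   # ...and leave circulation
    print(f"Day {day}: {demanded} coins redeemed, {reserves} left in the vault")
```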
The question ultimately boiled down to, if not gold and silver, then what would give paper money its value? And what would limit its supply? Otherwise, any enterprising printer could just print up money in any amount and give it to himself. Ultimately, the answers would come down to some sort of government authority to regulate the issuance of such bills, and back it up with the government’s credit.
One very common idea floating around in the late 1600s and early 1700s was the land bank–essentially, monetizing land. Such banks wouldn’t take deposits in gold or silver; rather, they would issue government-backed paper money securitized by mortgages on land. “In these early cases the term “bank” meant simply the collection or batch of bills of credit issued for a temporary period. If successful, reissues would lead to a permanent institution or bank in the more modern sense of the term.” After all, even if a country didn’t have gold and silver mines, it did always have land. Land was valuable, and inherently limited in supply–even more so than gold and silver (“Buy land – they aren’t making any more of it,” said Mark Twain). This was a variant of the idea of paper money as a claim on real resources. However, the problem was much the same as with the goldsmith’s trick: what happens if you print money in excess of the underlying resources?
[I]f we look at the world through the lens of the late 17th century…[m]oney was made of metal, and there was therefore no scope for creating more money without finding new supplies of silver and gold. There were two types of wealthy individual: moneyed men and landed men.
The land bank proponents were early contributors to the economic debate. In their pamphlets the principal problem that they identified was the sluggish economy. They all agreed that the situation could be improved and saw the best means of improvement as an increase in the supply of money.
Rather than doing this as the Spanish and Portuguese did by sailing to the new world and bringing back vast quantities of precious metals, they proposed using the banking model that had succeeded in Amsterdam and Venice. According to Schumpeter, they “fully realised the business potentialities of the discovery that money – and hence capital in the monetary sense of the term – can be manufactured or created”.
Britain, which was not rich in terms of gold and silver, had plenty of potential in its land. Therefore, a land bank appeared to be a sensible suggestion. None of the land banks that were set up succeeded…
Land banks had already been established in the American Colonies in a limited fashion:
In 1686, Massachusetts established the first American land bank. Others soon followed.
Despite the name, these were not true banks; they did not accept deposits. Instead, they issued “banks” or notes, or “bills on loan,” to borrowers who put up land as collateral with the bank.
To fortify confidence in the notes, colonial governments promised to issue only a fixed amount of notes and for a set term and to secure their loans with collateral typically equal to twice the amount of the loan.
These notes soon became legal tender for all public and private debts. Principal and interest payments were due annually, but the bank often delayed the first principal payment for a few years. Payments had to be made in notes or in specie.
While the notes furnished a circulating currency, the interest payments provided a revenue stream to the colonial governments.
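The mechanics just quoted lend themselves to a short sketch. Assuming the terms in the excerpt (collateral worth twice the loan, annual interest flowing to the colonial government) plus a hypothetical 5 percent rate and ten-year term, a single loan might look like this:

```python
# A colonial land-bank loan as described in the excerpt above: notes are
# issued against land collateral at half its appraised value, and the
# interest stream goes to the colonial government. The 5% rate, 10-year
# term, and land value are assumptions for illustration.

def land_bank_loan(land_value, rate=0.05, term_years=10):
    """Issue notes worth half the collateral (collateral = 2x the loan)."""
    notes_issued = land_value / 2
    annual_interest = notes_issued * rate
    total_interest = annual_interest * term_years
    return notes_issued, annual_interest, total_interest

notes, interest, total = land_bank_loan(land_value=200)  # pounds of appraised land
print(f"Notes lent into circulation:   {notes:.0f} pounds")
print(f"Annual interest to the colony: {interest:.0f} pounds")
print(f"Interest over the full term:   {total:.0f} pounds")
```

The over-collateralization is the whole trick: even if land prices fell by half, the notes would still be fully backed, which is what let a colony without gold mines issue trusted paper.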
National land banks were proposed in the early 1700s by two people who would become very influential in the history of paper money: John Law (for France) and Benjamin Franklin (for Pennsylvania). Later on, this idea would be used by the revolutionary French government to back its own paper currency, the assignats. It used land seized from the Catholic Church and some aristocrats to back the money. And there was a lot of this land—the Church owned an estimated one-fifth of all the land in France prior to the Revolution.
We can think of this as the very earliest rumblings of today’s Modern Monetary Theory (MMT). Money wasn’t gold and silver after all—rather, it was any means of exchange by which trade was conducted. The medium could be anything, so long as it retained its value in exchange. What really mattered was the supply of it: that it was somewhat commensurate with the amount of economic transactions desired. The Scotsman John Law, who would establish the first paper money system in France, had seen people at the gambling tables of England using bills of exchange, stocks, bonds, banknotes, IOUs—any sort of valuable paper instrument—as de facto money in a pinch. This gave him the essential insight that any paper whose value people trusted could be used as money, not just gold and silver coins:
[John] Law thought that the important thing about money wasn’t its inherent value; he didn’t believe it had any. “Money is not the value for which goods are exchanged, but the value by which they are exchanged,” he wrote. That is, money is the means by which you swap one set of stuff for another set of stuff. The crucial thing, Law thought, was to get money moving around the economy and to use it to stimulate trade and business.
As Buchan writes, “Money must be turned to the service of trade, and lie at the discretion of the prince or parliament to vary according to the needs of trade. Such an idea, orthodox and even tedious for the past fifty years, was thought in the seventeenth century to be diabolical.”
What was undeniable was that the growing economies of the North Atlantic needed more money, and lots of it; far in excess of what any gold and silver mines anywhere in the world could reasonably provide.
When it comes to paper money in the West, the foremost innovator was the United States, as John Kenneth Galbraith points out:
If the history of commercial banking belongs to the Italians and of central banking to the British, that of paper money issued by a central government belongs indubitably to the Americans. (Galbraith, p. 45)
The reason the American colonies had to experiment with paper money was simple: “official” money in the colonies meant gold and silver coins, and there was a perennial shortage of them.
The American colonies had no rich deposits of gold or silver, unlike the Spanish colonies in Latin America. There were no mines, and, to make things worse, no mints were allowed in North America. And, to top it all off, the British government forbade the colonies from chartering banks: “Thus bank notes, the obvious alternative to government notes, were excluded.” (Galbraith, p. 47). Colonists used whatever coins they could get their hands on, most of which came from the Spanish colonies to the south. In particular, this meant the Spanish Peso de Ocho Reales, or Piece of Eight: the world’s first global currency. This was also the origin of the famed dollar $ign. Foreign coins would continue to circulate as money in the United States until after the Civil War.
Since the colonies couldn’t mint their own coins, if you wanted to get your hands on gold and silver coins, you had no other choice but to trade with the outside world. If you didn’t trade with the outside world, then getting sufficient coins was really difficult, severely limiting internal trade. This wasn’t accidental—the British, like all colonial powers, wanted the colonies to be sources of raw materials for their domestic manufacturing industries, and not to be economically self-sufficient.
To help alleviate the ongoing shortage of precious metal coins, local authorities might have passed laws to restrict the export of gold and silver–what we would today call capital controls—but such laws were expressly forbidden by the British government. In the mercantilist world of the 1600s-1700s, the strength of a nation lay in the amount of gold and silver stashed away in its vaults—probably a holdover from the time when gold and silver paid for mercenaries in Europe, before the era of professional standing armies.
And so there was a perennial, ongoing shortage of currency for transactions. This was an anchor around the leg of the domestic economy of the colonies.
…the British colonies in North America suffered from a constant shortage of all coins. The mercantile policies then in vogue in London sought to increase the amount of gold and silver money in Britain and to do whatever was practical in order to prohibit its export, even to its own colonies.
Beginning in 1695, Britain forbade the export of specie to anywhere in the world, including to its own colonies. As a result, the American colonies were forced to use foreign silver coins rather than British pounds, shillings, and pence, and they found the greatest supply of coins in the neighboring Spanish colony of Mexico, which operated one of the world’s largest mints.
Because of the great wealth produced in Mexico and Peru, Spanish coins became the most commonly accepted currency in the world…The most common Spanish coin in use in the British colonies in 1776 was the pillar dollar, so named because the obverse side showed the Eastern and Western hemispheres with a large column on either side.
In Spanish imperial iconography, the columns represented the Pillars of Hercules, or the narrow strait separating Spain from Morocco and connecting the Mediterranean with the Atlantic. A banner hanging from the columns bore the words plus ultra, meaning “more beyond.” The Spanish authorities began issuing this coin almost as soon as they opened the mint in Mexico with the intent of publicizing the discovery of America, which was the plus ultra, the land out beyond the Pillars of Hercules.
Some people say that the modern dollar sign is derived from this pillar dollar. According to this explanation, the two parallel lines represent the columns and the S stands for the shape of the banner hanging from them. Whether the sign was inspired by this coin or not, the pillar dollar can certainly be called the first American silver dollar. (Weatherford, pp. 117-118)
Another thing the colonists did to get around this chronic shortage of metal coins was barter, which led to settling accounts with all sorts of things other than precious metal coins. They might settle accounts, for example, with so-called “country pay” or “country money,” typically cash crops: cod, tobacco, rice, grain, cattle, indigo, whiskey, brandy–whatever was at hand. In 1775, North Carolina declared as many as seventeen different forms of money to be legal tender.
Without the convenience of money, colonists resorted to many less-efficient methods of trading. Barter, of course, was common, particularly in rural areas, but individuals often had to accept goods that they did not particularly need or want only because they had no other way to complete a transaction. They accepted these goods hoping to pass them on in future trades. Some items, most famously tobacco in Virginia and Maryland, worked well in this way and became commodity monies directly or as backing for warehouse receipts. Various other types of warehouse receipts, bills of exchange against deposits in London, and individuals’ promissory notes might also circulate as money. In addition, shopkeepers and employers sometimes issued “shop notes,” a type of scrip—often in small denominations—redeemable at a specific store.
Out of necessity, merchants and wealthy individuals frequently extended credit to others. In an economy that depended heavily on barter, however, one could end up holding debts against many individuals and across a broad array of goods. People naturally hoped to net out some of these debts, but this is extremely difficult under barter. Fortunately, colonial creditors could tally debts in British pounds or colonial currencies even if these currencies were not readily available. In this way, money acted as a unit of account. By attaching a value to things, money accommodated the netting out of debts.
Paper Money and Inflation in Colonial America (Cleveland Fed)

One of the most popular substitutes in North America could be obtained domestically: beads made from marine shells, called wampum, which were used extensively in the tribute economy of the Iroquois nations. Wampum is one of a huge number of currencies all over the globe made from sea shells, including cowrie shells and dentalium. Since these were regarded as valuable by Native American tribes, they had the added advantage of being tradable for animal pelts bagged by the Native Americans (who soon stripped the forests bare in order to get more wampum—and hence more prestige). In 1664, Pieter Stuyvesant arranged a loan in wampum worth over 5,000 guilders to pay the wages of workers constructing the New York citadel. Wampum was even subject to a form of counterfeiting:
The first substitute was taken over from the Indians. From New England to Virginia in the first years of settlement, the wampum or shells used by the Indians became the accepted small coinage. In Massachusetts in 1641, it was made legal tender, subject to some limits as to the size of the transaction, at the rate of six shells to the penny.
However, within a generation or two it began to lose favor. The shells came in two denominations, black and white, the first being double the value of the second. It required but small skill and a smaller amount of dye to convert the lower denomination of currency into the higher.
Also, the acceptability of wampum depended on its being redeemed by the Indians in pelts. The Indians, in effect, were the central bankers for the wampum monetary system, and beaver pelts were the reserve currency into which the wampum could be converted. This convertibility sustained the purchasing power of the shells.
As the seventeenth century passed and settlement expanded, the beavers receded to the ever more distant forests and streams. Pelts ceased to be available; wampum ceased, accordingly, to be convertible and thus, in line with expectation, it lost in purchasing power. Soon it disappeared from circulation except as small change. (Galbraith, pp. 47-48)
Another very popular domestic currency in use was tobacco leaf. In fact, tobacco’s reign as currency in America lasted longer than gold’s:
Tobacco, although regionally more restricted, was far more important than wampum. It came into use as money in Virginia a dozen years after the first permanent settlement in Jamestown in 1607. Twenty-three years later, in 1642, it was made legal tender by the General Assembly of the colony by the interestingly inverse device of outlawing contracts that called for payment in gold or silver.
The use of tobacco money survived in Virginia for nearly two centuries and in Maryland for a century and a half – in both cases until the Constitution made money solely the concern of the Federal government. The gold standard, by the common calculation, lasted from 1879 until the cancellation of the final attenuated version by Richard Nixon in 1971. Viewing the whole span of American history, tobacco, though more confined as to region, had nearly twice as long a run as gold. (Galbraith, p. 48)
And such practices might be where Adam Smith came up with his erroneous notion of primitive barter economies, which continues to plague economics and economic history to this day.
This illustrates another dictum about money: barter tends to occur in fully monetized market economies where the medium of exchange is in short supply. This is because internal exchanges in market economies take the form of spot transactions among anonymous competing strangers. Anthropologists now know that pre-monetary economies were embedded in social relations and took the forms of reciprocity, redistribution, householding, and ceremonial exchange, rather than constant efforts to “truck, barter and exchange.” Anthropologists have never found an example of a barter economy anywhere in the world (e.g. “I’ll give you ten chickens for that cow”).
People in North America and other remote regions were using things like cod, tobacco, grain, brandy, and shells to settle accounts, sure—but these were fully monetized economies that just happened to have a chronic shortage of coins! To get around this, certain items that were particularly valuable because they could be traded with the outside world (like cod in Newfoundland, or tobacco in Virginia) were used to settle accounts. Or, because some items were particularly valuable inside the community, they could be used in subsequent trades as a medium of exchange (like iron nails in Scotland, another Smith example). One might include the “cigarette money” used in prisons in this category. A contemporary example is the use of spruce tips in remote Alaskan towns: spruce tips can only be harvested during a few weeks in the spring and are used in all sorts of exported products (beer, tea, soap, etc.) that are traded with the outside world.
A year after moving to Skagway, Alaska, John Sasfai walked into Skagway Brewing Co. and ordered the signature Spruce Tip Blonde Ale. But instead of pulling out his wallet, the guide for Klondike Tours put a sack of spruce tips on the bar to pay his tab. That’s because in this town, the bounty he foraged from trees near Klondike Gold Rush National Historical Park serves as a currency.
This village, with a year-round population just shy of 1,000, is notably remote – it’s about 100 miles north of Juneau and 800 miles south-east of Anchorage by car. And though stampeders established Skagway during the late-19th-Century gold rush, these days the nuggets of value are plucked from the forest, not panned or mined. While spruce tips – the buds that develop on the ends of spruce tree branches – are only good for cash at Skagway Brewing Co., bartering with spruce tips for food, firewood or coffee (which are delivered by barge once a week) is not uncommon.
In all of Smith’s cases, then, prices were denominated in standard units of account; people simply settled their debts with whatever was at hand. None of these things were the origin of prices and money, as Smith incorrectly claimed.
To begin with, Adam Smith’s error as to the two most generally quoted instances of the use of commodities as money in modern times, namely that of nails in a Scotch village and that of dried cod in Newfoundland, have already been exposed [as fraudulent] … and it is curious how, in the face of the evidently correct explanation … Adam Smith’s mistake has been perpetuated.
In the Scotch village the dealers sold materials and food to the nail makers, and bought from them the finished nails the value of which was charged off against the debt. The use of money was as well known to the fishers who frequented the coasts and banks of Newfoundland as it is to us, but no metal currency was used simply because it was not wanted.
In the early days of the Newfoundland fishing industry there was no permanent European population; the fishers went there for the fishing season only, and those who were not fishers were traders who bought the dried fish and sold to the fishers their daily supplies. The latter sold their catch to the traders at the market price in pounds, shillings and pence, and obtained in return a credit on their books, with which they paid for their supplies. Balances due by the traders were paid for by drafts on England or France.
A moment’s reflection shows that a staple commodity could not be used as money, because ex hypothesi, the medium of exchange is equally receivable by all members of the community. Thus if the fishers paid for their supplies in cod, the traders would equally have to pay for their cod in cod, an obvious absurdity. In both these instances in which Adam Smith believes that he has discovered a tangible currency, he has, in fact, merely found—credit.
Then again as regards the various colonial laws, making corn, tobacco, etc., receivable in payment of debt and taxes, these commodities were never a medium of exchange in the economic sense of a commodity, in terms of which the value of all other things is measured. They were to be taken at their market price in money. Nor is there, as far as I know, any warrant for the assumption usually made that the commodities thus made receivable were a general medium of exchange in any sense of the words. The laws merely put into the hands of debtors a method of liberating themselves in case of necessity, in the absence of other more usual means. But it is not to be supposed that such a necessity was of frequent occurrence, except, perhaps in country districts far from a town and without easy means of communication.
All of this experience showed colonists that multiple things could be used as money if needed. There was no more magic to a gold standard than to a cowrie standard, a tobacco standard, a grain standard, a cattle standard, or anything else for that matter. This would prove to be an instrumental lesson in the creation of paper money in the colonies.
Galbraith, for his part, gives an alternative explanation for the chronic lack of precious metals in the American colonies:
Many countries or communities had gold and silver in comparative abundance without mines. Venice, Genoa, Bruges had no Mother Lode. (Nor today does Hong Kong or Singapore.) While the colonists were required to pay in hard coin for what they bought from Britain, they also had products – tobacco, pelts, ships, shipping services – for which British merchants would have been willing, and were quite free, to expend gold and silver.
Much more plausibly, the shortage of hard money in the colonies was another manifestation of Gresham. From the very beginning the colonists experimented with substitutes for metal. The substitutes, being less well regarded than gold or silver, were passed on to others and thus were kept in circulation. The good gold or silver was kept by those receiving it or used for those purchases, including those in the mother country, for which the substitutes were unacceptable. (p. 47)
So the colonists were forced by economic necessity to experiment with paper money, and that is why the American colonies became the cradle of this innovation. As Galbraith notes of the above cases, “None of these substitutes was important as compared with paper money.” (Galbraith, p. 51)
Where did paper money come from? That’s the question behind this article from The New Yorker: The Invention of Money. It’s a review of recent biographies of John Law and Walter Bagehot. The author concludes:
The present moment in financial invention therefore has some similarities with the period when money in the form we currently understand it—a paper currency backed by state guarantees—was first created. The hero of that origin story is the nation-state. In all good stories, the hero wants something but faces an obstacle. In the case of the nation-state, what it wants to do is wage war, and the obstacle it faces is how to pay for it.
At the same time, I’ve been reading a few popular books on monetary history. One is Jack Weatherford’s The History of Money. Weatherford, best known for his books about Genghis Khan, is eminently readable, and hits most of the major developments. However, he is clearly in the Ron Paul school of economics: gold alone is money, governments are profligate and can’t be trusted, free banking is good, central banks are bad, etc. There are also a number of basic factual errors in the book, which leads me to recommend it only if you take it as a brief survey that gets many things wrong and is a bit outdated.
Weatherford’s major reference for his chapter on paper money is John Kenneth Galbraith’s Money: Whence It Came, Where It Went. So I decided to go directly to the source. Galbraith, a lauded economist, has a view that is much more authoritative and nuanced than Weatherford’s. Galbraith’s book concentrates mainly on the origins of banking and the modern money system, and not so much on the deep history of money in the ancient world or the Medieval period.
I’d like to take these (and others) and give an account of how the money system works today. While Modern Monetary Theory is a good description of how money works in nation-states in the present, it often doesn’t describe how that system initially came about, or what makes it so radically different from how the money system functioned in ancient economies.
But first, I’d like to say a few brief words on why any of this matters.
Like it or not, money runs the world. If you want to understand how the world works—and how to change it—it’s important to know how the systems comprising it work. Money may seem like a boring topic (sorry!), but I would argue that no knowledge is more fundamental and useful for trying to make things marginally better. I can’t tell you how many people I’ve met who call themselves “socially liberal but fiscally conservative.” And what do they mean by “fiscally conservative?” Nine times out of ten, it’s this: money is inherently scarce; debt is evil; and government budgets should be balanced down to the penny. You also have libertarian Bitcoin cranks, who are convinced that algorithms will save mankind once the state somehow withers away. These views are extraordinarily resistant to any kind of challenge, almost as if they were a de facto religion (in fact, they are probably even more resistant to rational analysis than most people’s religious faith!) Such people would be amenable to a more progressive message if not for the universal brainwashing about what money is and what it does. History can provide a useful guide.
China’s False Start
All paper money rests on the same fundamental basis: it consists of circulating IOUs. The identity of the issuer standing behind them and what’s used to secure them changes over time, however. Sometimes it’s a particularly reputable member of the community. Sometimes it’s a king or other ruler. Sometimes it’s a democratically-elected government—or more precisely, the future anticipated revenues of that government. Sometimes it’s backed by something tangible, like silver, gold, or real estate (the most common options). Sometimes it’s not. Nowadays, sovereign money is usually backed by the government’s ability to redistribute and to impose binding liabilities on its citizens (and, by extension, its monopoly on the legitimate use of force).
Paper money began where papermaking began: in China. The usual sources were hemp and mulberry bark, and printing blocks were made of wood or metal. Because of China’s strong imperial state structure, centrality, and geographic reach, it could command officially stamped pieces of paper to be accepted by its citizens as currency in lieu of precious metals. The story is told in this excellent podcast by Tim Harford on the origins of paper money: Paper Money (50 Things that Made the Modern Economy)
In Harford’s telling, paper money begins in Sichuan province, where iron coins were used rather than gold and silver in order to keep specie from leaking out of China to the hostile territories surrounding it, such as those of the Jurchen. Iron coins had holes in the middle and were carried around strung on cords (coins of this type are commonly known as “cash”). The problem, as you might expect, was that these strings of heavy iron coins were extremely cumbersome. You could end up handing over a greater weight of coins than the weight of the thing you were trying to buy: 10 pounds of coins for a five-pound chicken, or something like that.
Sichuan’s iron currency suffered from serious deficiencies. The low intrinsic value of iron coins, worth no more than a tenth of the equivalent amount of bronze coin, imposed a great burden on merchants who needed to convey their purchasing capital from one place to another, and on ordinary consumers as well. A housewife would have to bring a pound and a half of iron coin to the marketplace to buy a pound of salt, and a merchant from the capital would receive ninety-one and a quarter pounds of iron coin in exchange for an ounce of silver.
Of course, the inconvenience of transporting low-value coin affected bronze currency as well. In the early ninth century, the Tang government created depositories at its capital of Chang’an where merchants could deposit bronze coin in return for promissory notes (known as feiqian, or “flying cash”) that could be redeemed in provincial capitals. “Flying cash” was especially popular among tea merchants who wished to return their profits from the sale of tea in the capital to the distant tea-growing areas of southeastern China. The Song dynasty continued this practice under the rubric of “convenient cash” (bianqian), accepting payments of gold, silver, coin, or silk in return for notes denominated in bronze coin. (The Origins of Value, pp. 67-68)
In the mid-990s, Sichuan was captured by rebels (partly angered by depreciating currency), who shut down the mint. It remained shut even after the government regained control of the province. This prompted some private merchants to issue their own paper bills to compensate for the acute shortage of coins. Such bills represented debt—the debt of the private merchant, of course. These bills soon began to circulate, and people began using them in place of iron coins, as Harford describes:
Instead of carrying around a wagonload of iron coins, a well-known and trusted merchant would write an IOU, and promise to pay his bill later when it was more convenient for everyone.
That was a simple enough idea. But then there was a twist, a kind of economic magic. These “jiaozi”, or IOUs, started to trade freely. Suppose I supply some goods to the eminently reputable Mr Zhang, and he gives me an IOU. When I go to your shop later, rather than paying you with iron coins – who does that? – I could write you an IOU.
But it might be simpler – and indeed you might prefer it – if instead I give you Mr Zhang’s IOU. After all, we both know he’s good for the money. Now you, and I, and Mr Zhang, have together created a kind of primitive paper money – it’s a promise to repay that has a marketable value of its own – and can be passed around from person to person without being redeemed.
This is very good news for Mr Zhang, because as long as people keep finding it convenient simply to pass on his IOU as a way of paying for things, Mr Zhang never actually has to stump up the iron coins. Effectively, he enjoys an interest-free loan for as long as his IOU continues to circulate. Better still, it’s a loan that he may never be asked to repay.
No wonder the Chinese authorities started to think these benefits ought to accrue to them, rather than to the likes of Mr Zhang. At first they regulated the issuance of jiaozi, but then outlawed private jiaozi and took over the whole business themselves. The official jiaozi currency was a huge hit, circulating across regions and even internationally. In fact, the jiaozi even traded at a premium, because they were so much easier to carry around than metal coins.
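To see why circulation is such a good deal for the issuer, here is a toy Python sketch of Harford’s scenario. The names and amounts are illustrative; the point is simply that a transferable promise keeps settling purchases without ever being redeemed:

```python
# A toy model of a circulating IOU. Mr Zhang is taken from Harford's
# telling above; the holders and the amount are invented.
class IOU:
    def __init__(self, issuer, amount):
        self.issuer = issuer    # who ultimately owes the coins
        self.holder = issuer    # current bearer; the note starts in Zhang's hands
        self.amount = amount    # face value in iron coins
        self.redeemed = False

    def pay(self, new_holder):
        """Settle a purchase by passing the note on instead of redeeming it."""
        self.holder = new_holder

    def redeem(self):
        """Only now does the issuer actually hand over iron coins."""
        self.redeemed = True
        return self.amount

note = IOU(issuer="Zhang", amount=1000)
note.pay("shopkeeper")    # Zhang buys goods with the note
note.pay("tea_merchant")  # the shopkeeper spends it onward
# As long as nobody calls note.redeem(), Zhang enjoys an interest-free loan.
```

Every call to pay() is a completed transaction, yet the issuer’s coins never move. That gap between circulation and redemption is exactly the seigniorage the authorities decided to claim for themselves.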
Over the next ten years, these “exchange bills” became important in China’s intraregional trade, but bogus private bills issued by unscrupulous traders remained an ongoing problem for government officials. There were growing calls for the government to get more involved in the circulation of bills. Enter the new prefect of Chengdu, one Zhang Yong. In 1005, he issued a series of government reforms to address the problem. He:
1.) reopened Sichuan’s mints and introduced a new large iron coin that was equivalent to ten small iron coins, or two bronze coins;
2.) restricted the right to issue exchange bills to a consortium of sixteen merchant houses in Chengdu that were known to have sufficient financial resources to back the bills up, and;
3.) standardized the bills by mandating that they be issued in a specified size, color and format, using government-supplied labor and materials (although merchants could add their own watermark).
There were no standard denominations; rather, the merchants ascribed the value of the note in ink as needed. A three percent fee was charged for cashing in the notes. There was no limit on the number of bills issued. The amount of bills in circulation tended to vary with the seasons: more bills were issued in the early summer when new silk reached the market and in the fall during the rice harvest.
There were still problems with the paper currency, however, such as counterfeiting and overissuance of bills without sufficient backing. In 1024 under a new governor, Xue Tian, the government took over the issuance of jiaozi. A state-run Jiaozi Currency Bureau was established in Chengdu and given exclusive rights to issue jiaozi. The bills had the same format, but were issued in fixed denominations: one and ten guan. Most significantly, the bills had an expiration date of two years, exchangeable for fresh ones, giving the government a modicum of control over the amount issued and preventing the counterfeiting of worn or outdated bills. Also, quotas were established for the issue of the currency. Tea merchants engaged in intraregional and international trade were the most enthusiastic users of the currency, as it eliminated the need to transport heavy coins and prevented robbery by bandits (note that the needs of traveling merchants were also instrumental in the creation of Bills of Exchange issued by banks in medieval Europe centuries later).
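The 1024 arrangement can be sketched in a few lines. This is a minimal sketch under the rules described above (fixed denominations of one and ten guan, an issue quota, a two-year expiry with renewal); the quota figure and dates are illustrative, not historical:

```python
from datetime import date, timedelta

DENOMINATIONS = {1, 10}            # guan, per the 1024 reform
ISSUE_QUOTA = 1_000_000            # illustrative quota, in guan
VALIDITY = timedelta(days=2 * 365) # notes expire after two years

class JiaoziBureau:
    def __init__(self):
        self.outstanding = 0       # guan currently in circulation

    def issue(self, denom, issued_on):
        if denom not in DENOMINATIONS:
            raise ValueError("non-standard denomination")
        if self.outstanding + denom > ISSUE_QUOTA:
            raise RuntimeError("issue quota exhausted")
        self.outstanding += denom
        return {"denom": denom, "expires": issued_on + VALIDITY}

    def renew(self, note, today):
        """Retire an expiring note and hand back a fresh one."""
        self.outstanding -= note["denom"]
        return self.issue(note["denom"], today)

bureau = JiaoziBureau()
note = bureau.issue(10, date(1024, 6, 1))
note = bureau.renew(note, date(1026, 5, 30))  # swap before expiry
```

The expiry-and-renewal cycle is the interesting design choice: it forces every note back through the Bureau, which both caps counterfeiting of worn bills and gives the state a periodic census of how much paper is actually out there.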
Yet there were still problems. The government issued notes to procure military supplies from the merchants, and the ongoing costs of wars on the frontier led to their overissue. Plus, a new emperor nationalized the tea industry, meaning that the major consumers of jiaozi—the tea merchants—no longer had as much use for them. This loss of demand alongside oversupply caused a sharp depreciation in the value of the currency in the market. Instead of trading at a ten percent premium, the bills were now accepted at a ten percent discount. In 1107 the government issued a new paper currency—the qianyin—at a rate of 1:4 to the old, depreciating the earlier jiaozi bills in an effort to reduce the supply.
The rest of the history of China’s bills is basically a cycle of the same thing: issuing new bills, overspending due to military needs on the frontier, rampant counterfeiting, bills depreciating, demonetizing old notes, new dynasties issuing new bills, and so on. Bills were still in use in trade when Marco Polo visited China. Here is a fourteenth-century description by the Arab traveller Ibn Battuta:
The Chinese use neither [gold] dinars nor [silver] dirhams in their commerce. All the gold and silver that comes into their country is cast by them into ingots, as we have described. Their buying and selling is carried on exclusively by means of pieces of paper, each of the size of the palm of the hand, and stamped with the sultan’s seal. Twenty-five of these pieces of paper are called a balisht, which takes the place of the dinar with us [as the unit of currency]
This demonstrates some of the essential dictums of Modern Monetary Theory.
The first is Hyman Minsky’s dictum: Anyone can create money, the secret is in getting it accepted.
The second is Felix Martin’s definition of money: Money is tradeable debt.
The third is the observation that the credit bearing the highest reputation is typically that of the sovereign. Gresham’s Law being what it is, this usually means that the sovereign’s money will drive out all competitors, as we’ll see much later in the United States during the Civil War.
As a reminder, Gresham’s Law is this: Bad money drives out good, or perhaps, more accurately, people spend “lesser” money if they can, and hoard “greater” money for themselves.
Gresham’s Law…is perhaps the only economic law that has never been challenged, and for the reason that there has never been a serious exception. Human nature may be an infinitely variant thing. But it has its constants. One is that, given a choice, people keep what is best for themselves, i.e. for those whom they love the most. (Galbraith, p. 8)
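Gresham’s Law is easy enough to simulate. In this toy Python sketch (all parameters invented), every agent holds full-weight (“good”) and clipped (“bad”) coins of equal face value and always spends the bad ones first; almost every transaction ends up conducted in bad coin while the good coin sits in hoards:

```python
import random

random.seed(1)

# Each agent starts with full-weight coins ("good") and clipped ones ("bad")
# of identical face value.
agents = [{"good": 10, "bad": 10} for _ in range(20)]

def spend(payer, payee):
    """Pay one coin, preferring the lower-intrinsic-value kind."""
    kind = "bad" if payer["bad"] > 0 else "good"
    if payer[kind] > 0:
        payer[kind] -= 1
        payee[kind] += 1
        return kind
    return None

circulated = {"good": 0, "bad": 0}
for _ in range(500):
    a, b = random.sample(range(len(agents)), 2)  # random buyer and seller
    kind = spend(agents[a], agents[b])
    if kind:
        circulated[kind] += 1

print(circulated)
# "bad" dominates: clipped coins do the circulating, full-weight coins
# pile up in hoards -- bad money drives out good.
```

No coordination is needed; each agent’s purely local preference to keep the better coin is enough to strip the good money out of circulation.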
A similar rationale led to the establishment of banks and banking in Northern Europe during the Age of Sail. You deposited coins and got a receipt for the amount of coins stashed in the vault. These receipts could be used to pay for things, with the value equivalent to the coins traded (in fact, the notes were more valuable, since they couldn’t be melted down or devalued).
A final interesting note: overissuance of paper currencies and lavish spending by the Yongle emperor Zhu Di (on wars, but also notably on the Chinese treasure ship voyages) led to China going back onto a silver standard just in time for the European discovery and conquest of the New World. The Chinese demand for silver is what fueled the European trade with the Far East, since the Europeans had nothing else that the Chinese wanted to exchange for goods like silks and porcelain. Without that silver standard, who knows what would have happened?
The sizable deficits incurred by Yongle’s costly foreign expeditions, including the famous maritime explorations of Admiral Zheng He and his fleet, and by the emperor’s decision to relocate the Ming capital from Nanjing to Beijing were abated, albeit temporarily, by printing more money. Finally, in the 1430s, the Ming yielded to economic realities, abandoning its paper currency and capitulating to the dominance of silver in the private economy. The Ming state gradually converted its most important sources of revenue to payments in silver, while suspending the emission of paper money and the minting of bronze coin.
Though still uncoined, silver prevailed as the monetary standard of the Ming and subsequent Qing dynasty (1644-1911), fueled from the sixteenth century onward by the import of vast quantities of foreign silver from Japan and the Spanish colonies in the Americas. In times of fiscal crisis, such as on the eve of the fall of the Ming dynasty in 1644 and during the worldwide depression of the 1830s to 1840s, appeals to restore paper currency were renewed, but ignored. In the nineteenth century private banks, both Chinese and foreign, began to issue negotiable bills, but the weakness of the central government after its defeat in the Opium War precluded the emergence of a unified currency…Not until 1935, under the Republic of China, did China once again have a unified system of paper currency. (The Origins of Value, pp. 87-89)
Although paper money first originated in China, the paper money we use today has no direct lineage from these systems. Government-issued paper money was invented independently in Western Europe, under very different circumstances. We’ll take a look at that next time.
Despite the importance of paper money in Chinese history, the modern world system of paper money did not develop in China, or even in the Mediterranean homeland of Marco Polo or ibn-Batuta. It evolved in the trading nations around the North Atlantic. (Weatherford, p. 129)