What Is Neoliberalism?

I see a lot of discussions about Neoliberalism where it is clear that the people using the term don’t really know what it means. As Philip Mirowski, a historian of Neoliberalism, remarked, Neoliberalism has become, for many, “a blanket swear-word for everything they despise, or a brainless synonym for modern capitalism.” The Guardian similarly notes, “the word has become a rhetorical weapon, a way for anyone left of centre to incriminate those even an inch to their right.” Many people also have trouble defining it. It’s a bit like the famous definition of pornography: hard to pin down in precise terms, but you know it instinctively when you see examples of it in real life.

But there’s nothing complicated about Neoliberalism. It’s actually quite simple. From a few fundamental axioms, all the political ideas of Neoliberalism can be constructed.

Philosophical Justification

It all has to do with how resources are allocated.

A resource is anything society produces – guns, butter, energy, doctors, medicines, roads, automobiles, iPads, MRI machines, snowplows, BMWs, eggs, you name it. These resources must be allocated somehow. You want resources to go where they are needed. You don’t want shortages, but you don’t want resources to lie idle either. How do you accomplish this on a societal scale?

Neoliberalism is based on the argument that only markets can allocate resources efficiently.

Everything in Neoliberalism proceeds from this assumption!

The inverse assumption is that alternative means of resource distribution—whether that be central planning, command and control, public provisioning of goods and services, gift economies, rationing, or what have you—are necessarily inefficient. In short:

• Markets = efficient; good allocation of resources (resources match wants and needs).

• Government allocation = inefficient; poor allocation of resources. Resources end up where they don’t belong, surfeits and (especially) shortages result.

Following from this is the idea that since markets are, by their very nature, efficient, they will produce the most goods at the best price, and hence that higher-quality goods will be more available to more people than if government provides them at cost.

Another premise that follows from that is this: a dollar spent in the market by an individual consumer is going to be far more effective (i.e. it will go further) than a dollar spent by the government.

Why? It all has to do with how Neoliberals perceive markets. In their view, markets are aggregates of millions of individual people making decisions, which leads to ideal outcomes that no single planner could comprehend. As Philip Mirowski puts it, Neoliberals see markets as “information processing devices” par excellence—more powerful than any single human brain, but still patterned on the human brain. Governments, by contrast, do not have enough information-processing power to allocate resources properly. This is called the calculation problem.

Much of this comes from the idea that markets are competitive, and that they naturally head towards equilibrium.

Competition is self-explanatory: many firms competing against one another to deliver the best goods at the best price, such that no one company can overcharge, or produce shoddy goods.

Equilibrium is often expressed in terms of graphs. The idea is that there are a certain number of people who want to sell their goods and services at a high price, and a number of buyers who want to pay a low price for those goods and services. Through millions and millions of individual choices, the argument goes, a price will be arrived at naturally over time through the workings of the “invisible hand,” where the amount of stuff to be sold will exactly match the desire to buy, such that no products are left unsold, and no buyers are left unsatisfied. The supply and demand curves on the blackboard intersect. According to Investopedia, it’s “the state in which market supply and demand balance each other, and as a result, prices become stable.”
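To make the textbook picture concrete, here is a minimal sketch (with invented, purely illustrative coefficients, not data from any real market) that solves for the price at which a linear demand curve crosses a linear supply curve:

```python
# Illustrative only: linear demand and supply curves with made-up numbers.

def demand(price):
    return 100 - 2 * price   # quantity buyers want falls as price rises

def supply(price):
    return 10 + 4 * price    # quantity sellers offer rises with price

# Equilibrium: demand(p) == supply(p)  =>  100 - 2p = 10 + 4p  =>  p = 15
equilibrium_price = (100 - 10) / (2 + 4)
equilibrium_quantity = demand(equilibrium_price)

print(equilibrium_price, equilibrium_quantity)
# 15.0 70.0 -- at $15, exactly 70 units are offered and exactly 70 are wanted
```

At that crossing point, the story goes, nothing is left unsold and no buyer goes unsatisfied.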

The drive to compete, Neoliberals argue, makes private firms “lean and mean,” unlike “wasteful” governments. Private firms cannot be inefficient, they say, because otherwise they would be outcompeted by their rivals. This relentless drive towards efficiency means that private firms can always deliver goods and services more cheaply than any government can.

By contrast, governments do not have to compete within their own borders, the argument goes, and this makes governments inherently inefficient and sclerotic. Neoliberals revel in stories of “lazy government bureaucrats,” and “useless pencil pushers.” They constantly deride “incompetent bureaucrats,” and “wasteful government spending.” Conversely, they like to celebrate stories of “heroic” entrepreneurs sleeping only three hours a night, and “genius” inventors bringing new products to market.

Since only markets can allocate resources efficiently, it follows that governments should utilize markets as well, by purchasing all sorts of things from private contractors rather than producing them directly or owning companies. This, the thinking goes, will save taxpayers money. Oftentimes, government getting into bed with private contractors is expressed by the term “public-private partnership.”

It also follows that anything not allocated by markets will be inefficient, and so things not currently allocated by markets must be transformed into things which can be. This process is called commodification—turning things into commodities to sell, in order that markets can allocate them according to their dictates.

These are the core ideas animating Neoliberalism. That’s why it’s sometimes called “free market fundamentalism” by its critics, or somewhat less generously, “market fetishism.”

Neoliberalism is obsessed with markets. That’s why one of the major Koch-funded Neoliberal/Libertarian thinktanks is called the Mercatus Center. Mercatus is the Latin word for “market” (it was originally called the Center for Market Processes).

Based on the core Neoliberal assumptions above, since markets are efficient and allocate resources well, while governments are inefficient and allocate resources poorly, it makes more sense for people to simply go out and shop for the things they need, rather than having governments provide them directly to the public.

This is the idea behind the mania for vouchers. The government simply hands you a check, and you go out and shop in the one big market for the things that government formerly provided to you. “School choice” is the most obvious example—competition in markets is expected to raise the quality of schools, while the government strips back its commitment to funding free public education for all.

This is also the theory underlying seemingly endless rounds of tax cuts. Rather than the government taking a dollar and doing something for you, the government could leave that dollar in your wallet and let you go out and shop for what you need. The rationale is that, since markets are so much better than governments at allocating resources, a dollar in the market is more effective than a dollar taken by the government as taxes.

The constant refrain is, “you know how to spend *your* money better than governments do.” It’s a great bumper-sticker slogan, and it’s never really questioned or examined.

The other aspect of this argument is a moral one. Neoliberals are not as extreme in their hatred of taxes as Libertarians are; they recognize that the state requires some revenue in order to operate, and they don’t consider the very idea of taxation to be theft. They reluctantly recognize the need for some bare-bones public services, such as police and firefighters, as well as basic infrastructure. Nevertheless, they agree with the Libertarians’ core framing that government revenues must be taken “at the point of a gun!”

Also, you don’t choose what the government spends your money on. To some extent you do, of course, because you elect the representatives that decide on your behalf how the government spends its money and allocates its resources. But given the size and complexity of most modern governments—not to mention the unpopularity of today’s politicians—many people don’t really buy into that argument anymore.

But in markets, Neoliberals say, you are “free to choose.” Any economic transaction, they say, is a completely voluntary transaction that leaves both parties better off, and thus there can be no coercion in markets. Therefore, it is more moral to let people keep more of “their own money,” and spend it ways that they choose, rather than take it away via taxation and spending it in ways that some people might not like or approve.

By “leaving more money in people’s wallets,” people can then go out and buy whatever they need from the markets through shopping, and not have to spend money on things they may not want or need.

That’s the crux of the morality argument: one alternative is spending freely “by choice,” while the other is necessarily coercive—that is, “at the point of a gun!” In this conception, wealth is generated exclusively by the private sector, which “earns” all the money. The government, on the other hand, can only operate on what it “seizes” from the private sector “by force!” And, as we’ve said before, Neoliberals view governments everywhere as, almost by definition, lazy, inefficient, incompetent, and corrupt.

Neoliberalism places a great emphasis on “personal responsibility.” Rather than governments providing everything for you, they say, it’s your responsibility to be a smart shopper. You need to save adequate money out of your own paycheck for retirement— no more “dependency” on government pensions or “bankrupt” Social Security. You need to make sure you have adequate insurance for any contingency—don’t expect the government to bail you out! And you’d better make sure you have enough savings for any emergency—government is there to make markets, not to help “scroungers” who weren’t “responsible” enough!

Neoliberalism also holds that if markets appear to fail, it must always be due to some sort of “interference” by the state. Interference is anything government does that “distorts” the market from achieving equilibrium—that is, anything that keeps self-adjusting markets from finding their “natural” price for goods and services, where supply matches demand. If markets fail, the thinking goes, it must be the government’s fault in some way. The solution, then, is to strip away such government “interference” as much as possible, and to let the markets do their thing. In practice, this often means limiting governments’ very power to intervene in markets—through deregulation and legal restrictions—even when intervention would be in the public’s interest. Regulations “distort” markets, they argue, and so Neoliberals always favor deregulation.

So things like unions, minimum wages, worker safety rules, environmental regulations, and rent controls are seen as unnecessary “interference” in the natural workings of the market; that is, they prevent the market from working the way that it should. Free markets cannot fail, they say; they can only be failed.

They also believe that actors in markets are always rational. When people do not act the way Neoliberals say that they should, they advocate something called “nudge theory.” Nudge theory is a kind of social engineering undertaken by governments designed to get people to behave like the idealized consumers that Neoliberalism insists that we all are. “By knowing how people think, we can make it easier for them to choose what is best for them, their families and society,” wrote Richard Thaler, the Nobel (Bank of Sweden) Prize-winning Neoliberal economist who developed the theory. Instead of governments shaping markets, people are shaped to markets.

Since markets theoretically provide everything we could possibly want or need, Neoliberals argue, the government should confine itself to creating and policing markets, and little else. Unlike Libertarians, they realize that markets do not magically spring from nothing, but are created and sustained by government action. Therefore, they do not necessarily favor “small government,” since that might impact markets. Instead, they argue for a superempowered government that can make markets work, but not “interfere” in their inscrutable workings. As noted above, these superempowered governments are also encouraged to engage in large-scale social engineering if it helps create better market outcomes.

An example of the superempowered state is seen with intellectual property rights. To adequately enforce these, governments need extreme spying powers to make sure that no one anywhere in the world is violating the rights of copyright owners by, say, making copies of DVDs, or distributing digital music for free. Just try putting up copyrighted material on YouTube, for example. Or note the dire warnings from the FBI at the beginning of every DVD. Clearly “small government” cannot do these things. Because of this, Neoliberals support the coercive powers of the state insofar as they make markets—international courts, for example—but not when they provide goods and services to the public, or redistribute wealth.

As Mirowski notes, if a market ever seemingly fails, the Neoliberal solution is to create “new and stranger markets” to alleviate the failure, rather than to impose new regulations.

The classic example here is carbon. Many companies are profitable only because they can freely spew unlimited amounts of carbon into the atmosphere, altering the climate for everyone. The solution, Neoliberals argue, is not to restrict the amount of carbon, as might be expected. Instead, Neoliberals advocate “cap and trade” agreements, where “carbon credits” are traded across the world in strange, new “carbon markets” which are created and sustained by international governments. A new class of traders can then “wheel and deal” with such credits so that carbon spewing can be “priced correctly,” and “allocated efficiently” across the globe. This solution is considered to be simply “common sense” by most politicians and the entire mainstream economics profession, and no other solution is seriously considered.
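The internal logic of the mechanism can be shown with a deliberately toy calculation (numbers invented for illustration, not drawn from any actual carbon market): letting the cheapest abater do the abating hits the same overall cap at lower total cost.

```python
# Toy cap-and-trade arithmetic with invented numbers.
# Two firms must each cut 1 tonne of emissions; abatement costs differ.
cost_a = 20    # Firm A can cut a tonne for $20
cost_b = 100   # Firm B can cut a tonne for $100

no_trading = cost_a + cost_b   # each cuts its own tonne: $120 for 2 tonnes

# With trading, A cuts 2 tonnes and sells a credit to B for some price
# between $20 and $100. The credit payment is a transfer between firms;
# the real resource cost of the same 2-tonne cut is just A's abatement:
with_trading = 2 * cost_a      # $40 for the same 2 tonnes

print(no_trading, with_trading)  # 120 40
```

This is the “allocated efficiently” claim in miniature; the controversy is over everything the toy model leaves out.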

This is also behind the drive to put a price tag on nature. Rather than restrict our impact on the natural world through rules, restrictions, and regulations, Neoliberals argue that we need to let the market sort it out. In order to accomplish this, they say, nature must be fully accounted for on the balance sheets of corporations. To that end, nature must be transformed into “ecosystem services” which can then be “priced adequately.” Then, the argument goes, markets will no longer see nature as a free service anymore, and everything will then be allocated correctly and efficiently by the “free” market! Again, government regulations are depicted as inherently coercive, with anarchic markets providing true “freedom.” Neoliberals also believe that individual consumer choices will solve serious structural problems like climate change without the need for new regulations (e.g. “eat less meat,” “take shorter showers,” etc.). A small number of Neoliberals deny man-made climate change altogether.

In addition to the moral justifications regarding coercion and freedom, another argument is that the market naturally solves problems through “private initiative” without the need for any sort of government intervention. In other words, regulations are actually counterproductive!

The classic story here is told by the wildly popular Neoliberal economics book, Freakonomics. In that story, manure from horses used for transportation threatens to drown entire cities in mountains of poo in the early twentieth century. Planners look ahead at urban growth during this time period and realize that there is just too much manure for them to possibly deal with. Government regulations are contemplated. Then, the private sector magically solves the problem through the invention of the automobile—no harmful or coercive restrictions required, just plucky “private initiative!” The fact that automobiles transform visible pollution—manure—into invisible pollution—airborne particulates—is conveniently ignored in the story. So, too, is the massive amount of government regulation and infrastructure spending that the automobile called forth.

The bigger and more all-encompassing markets are, the better off we will all be, claim Neoliberals. That’s why they favor unrestricted globalization and free-trade agreements that create transnational globalized markets, removing all restrictions on the movements of capital, labor, money, goods, people, and information. They see the whole world as one giant market.

This provides a good short explanation:

The economic paradigm which promotes the small state and reliance on market forces is generally known as neo-liberalism, or the Washington Consensus. Under neo-liberalism, the state does little more than maintain the rights of ownership and internal and external security through criminal justice and armed services – notwithstanding, the state may bail out financial services if they require public aid.

In the UK and the USA politicians from both main parties adopted this point of view, often in sincere, if misguided, belief in its validity. Thus, neo-liberalism maintains the appearance of democracy, in that citizens may vote for political leaders, but limits the range of policies on offer to those which are acceptable to markets – or rather, those who command market forces.

Dude, Where’s My Democracy (Open Journal)

Wendy Brown, author of Undoing the Demos: Neoliberalism’s Stealth Revolution, defines Neoliberalism more succinctly as, “a peculiar form of reason that configures all aspects of existence in economic terms.” The Guardian argues that Neoliberalism,

…is a name for a premise that, quietly, has come to regulate all we practise and believe: that competition is the only legitimate organising principle for human activity.

In a similar vein, in a widely-read column, Guardian columnist George Monbiot declares,

Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.

Examples of Neoliberalism in Action

It might help to illustrate this by examples of Neoliberal policies in action. These policies, it must be emphasized, are implemented at all levels of government: local, state, federal, and even internationally (the World Bank, the IMF, trade agreements, etc.).

Here’s a local example taken from real life. County M owns and operates its buses through a company owned directly by the county. Later, the county spins off that company into a private entity that runs its bus system. The company itself is technically private, but it has only one customer – the county. From the county’s standpoint, it no longer has to worry about running and managing a business; it only has to allocate a given sum to that company. The company runs and manages itself; the county has washed its hands of it.

But later on, the county decides that it will instead contract with a private “transportation company” operating out of another, remote state. This privately-owned company also runs other transportation systems around the country (and perhaps even around the world). The local bus company is subsequently disbanded, throwing a large number of local people out of work. Some may be hired back as drivers, mechanics, etc., but most will not be; those incomes simply disappear. Taxpayer money for the bus service is now funneled to employees in another state across the country, instead of to local citizens. Neoliberal politicians argue that this move will save taxpayer money, despite throwing a lot of people out of work, since private firms operating in a “competitive” environment are always more efficient than government bureaucracies.

Another example is something as simple as, say, police uniforms. Police uniforms used to be made by the city which needed them. Or, they might be made by a local company. But under Neoliberalism, it is simply another purchase in the market from a private company, and that company can be anywhere in the world. So the city purchases police uniforms from China or India, even while its own local factories shut down from lack of work.

Rohan Grey summarizes Neoliberal alternatives to publicly provisioned goods and services:

Generally speaking, the Progressive position on most things has been, it is better to offer direct public services for free, than to presume that the way to give people access to their basic needs is to give people a check…The classic example here is school vouchers. School vouchers are typically seen—quite rightly I think—as a Neoliberal alternative to public schools. Don’t give people public schools, give them some cash and let the “market” provide their schooling needs for them. Healthcare–don’t give people Medicare for All, give them a health care rebate and let them go purchase health care on the market. Don’t give them public housing, give them a rental tax credit and let them go get housing from private landlords. These are the Neoliberal alternatives to public service provisioning.

This is why there is so much resistance to public health care systems among Neoliberals. To be fair, some Neoliberals have given up arguing that private companies are more effective and efficient than governments because the evidence from the United States is just too overwhelming. But, even if they don’t admit it, the bias against anything associated with the government—and the attempts to preserve private power—continues to be deployed in political rhetoric and policy proposals.

That’s how you end up with something like the Affordable Care Act (Obamacare). Under the ACA, you have to go buy insurance on the market from private insurance companies rather than have it provided to you at cost by the state. You are also required by law to purchase this product (the same goes for auto insurance) or face penalties.

Even now, rather than promoting government-provided healthcare, Neoliberals simply argue for “honest pricing” by doctors and hospitals so that people can see prices upfront, which will theoretically permit “comparison shopping” to bring prices down to affordable levels. Rather than “free” college or healthcare, they argue for such things to be “affordable,” which is Neoliberal code-speak (c.f. the “Affordable Care Act”).

You see a similar discussion surrounding higher education. It’s not price-gouging that’s raising college costs, say Neoliberals, or elite status competition—it’s the government “distorting” the market for education via student loans! Just make the government stop providing loans to low-income students, they argue, and college prices will magically reset themselves to more acceptable levels thanks to the magic of the market, and more people will be able to afford higher education. If education is a publicly-provided good, they argue, it will not be priced correctly or allocated properly (as per the above concepts). In addition, Neoliberals argue, we do not adequately “appreciate” that which we do not pay for (an argument also made in the case of health care services, hence the existence of co-pays and deductibles).

The same goes for the housing crisis. Just remove building regulations and restrictions, they say, and the market will automatically provide sufficient shelter for people. Just, whatever you do, do not “interfere” in the market with things like rent control, they say, because you will make everyone worse off! The idea that “impersonal market forces” will not solve the housing crisis—or that government has any important role to play besides simply “getting out of the way”—is unthinkable. So, too, is the notion of shelter as a basic human right. Neoliberalism does not believe in human rights, only in what the market provides. Naked Capitalism cheekily defined Neoliberalism by two simple rules: a.) Because markets, and b.) Go die!

Upon close examination, I see Neoliberalism as encompassing these core principles:

  • Monetarism
  • Austerity, “Paygo,” and Balanced Budgets
  • Privatization
  • Globalization
  • Flexible Labor
  • Low Taxes and Supply-Side Economics
  • Financialization
  • Deregulation
  • Vouchers
  • Private Charity

Let’s tackle them one at a time.

Monetarism: This is the idea that “inflation is always and everywhere a monetary phenomenon,” and that “tight money” must be strictly maintained to prevent inflation. That is, the government’s major role is to guarantee the “soundness” of money rather than to take care of people’s needs, leaving it to the market to do the rest.

This policy was implemented as a response to the high inflation of the 1970s. Yet over forty years later, with no sign of inflation in sight, it continues to be maintained, regardless of the macroeconomic conditions, the low cost of borrowing, or the amount of idle resources sitting around and failing to be utilized by the private sector:

Despite the prolonged existence of idle plant and heavy unemployment among a literate, trained labour force, the United States seems unable to mobilize these resources to rebuild our decaying cities, to revitalize mass transit, to regenerate clear air and waterways, and so on. Why are we so impotent?

Conventional wisdom suggests that any mobilization of idle resources for a war on such things as decay, pollution, poverty will require either additional government expenditures or private sector tax cuts. This means huge deficits financed by increasing the quantity of money which, monetarists claim, can only fuel the fires of inflation. Until we tame the dragon of inflation, we are told, these projects – no matter how desirable – must wait. Conventional wisdom says we must stoically accept tight money and stringent constraint on governmental spending for many years (the long run?) if inflation is to be stopped.

https://link.springer.com/chapter/10.1007/978-1-349-11513-6_20

Monetarists continue to argue that government spending generates unacceptable inflation. But, increasingly, this is seen as out of date, as David Graeber describes:

Economists still teach their students that the primary economic role of government—many would insist, its only really proper economic role—is to guarantee price stability. We must be constantly vigilant over the dangers of inflation. For governments to simply print money is therefore inherently sinful.

If, however, inflation is kept at bay through the coordinated action of government and central bankers, the market should find its “natural rate of unemployment,” and investors, taking advantage of clear price signals, should be able to ensure healthy growth.

These assumptions came with the monetarism of the 1980s, the idea that government should restrict itself to managing the money supply, and by the 1990s had come to be accepted as such elementary common sense that pretty much all political debate had to set out from a ritual acknowledgment of the perils of government spending. This continues to be the case, despite the fact that, since the 2008 recession, central banks have been printing money frantically in an attempt to create inflation and compel the rich to do something useful with their money, and have been largely unsuccessful in both endeavors.

Against Economics (NY Review of Books)

Counter-monetarists argue that most money is created, in effect, by private-sector bank lending, and thus restricting government spending has little effect on money creation in the overall economy (the idea that money creation by private entities affects the macroeconomy is known as endogenous money theory):

Since modern money is simply credit, banks can and do create money literally out of nothing, simply by making loans. Almost all of the money circulating in Britain at the moment is bank-created in this way. Not only is the public largely unaware of this, but a recent survey by the British research group Positive Money discovered that an astounding 85 percent of members of Parliament had no idea where money really came from (most appeared to be under the impression that it was produced by the Royal Mint).

Against Economics (NY Review of Books)
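That point can be illustrated with a stylized double-entry sketch (a toy model with invented numbers, not a description of any actual bank’s books): a loan creates a matching deposit on the spot, and no prior saver’s money is “lent out.”

```python
# Stylized double-entry sketch: a bank loan creates a new deposit.
# The ToyBank class and its numbers are invented for illustration.

class ToyBank:
    def __init__(self):
        self.loans = 0      # assets: the bank's claims on borrowers
        self.deposits = 0   # liabilities: money in customers' accounts

    def make_loan(self, amount):
        # Both sides of the balance sheet grow at once; no existing
        # deposit is drawn down. The new deposit IS new money.
        self.loans += amount
        self.deposits += amount

bank = ToyBank()
bank.make_loan(1000)
print(bank.deposits)  # 1000 -- spendable money that did not exist before
```

(Real banks face capital rules, settlement, and funding costs that this sketch ignores; the point is only where the deposit comes from.)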

Austerity, Paygo and Balanced Budgets. These are all of a piece. Neoliberals argue that too much public debt is inherently a drag on the economy, and thus distorts markets. Therefore, they argue, government must not spend more than it collects in tax revenues, even at the national level. In other words, government at all levels must balance its budget, just like you and me! They even advocate for this principle to be enshrined into law. Of course, this puts severe restrictions on what governments can do (these ideas are dropped like a bad habit, however, when it comes to funding wars and bailouts). Per Wikipedia:

Austerity is a set of political-economic policies that aim to reduce government budget deficits through spending cuts, tax increases, or a combination of both…The measures are meant to reduce the budget deficit by bringing government revenues closer to expenditures, which is assumed to make the payment of debt easier. Austerity measures also demonstrate a government’s fiscal discipline to creditors and credit rating agencies.

“Paygo” means that any new spending proposals by politicians must account for every penny of the new spending via new taxes. This essentially limits what governments can spend to how much they can tax. That is, they cannot spend without first taxing the equivalent amounts “away from” the private sector. It also means that new government initiatives will always be inherently unpopular, since they will always necessitate a rise in people’s taxes. Paygo is heavily supported by the so-called “Leftist” Democratic party in the United States (or at least by the people running it).

Austerity in practice means that government programs that serve the public—especially low-income and vulnerable citizens—are pared back in an attempt to cut government budget deficits, even during a recession. It’s often couched in moral rhetoric: “we spent like drunken sailors” during the “good times,” and now the time has come to “pay the piper.” Crucial to this argument, of course, is the notion that government debt is bad, and that governments are just like households. David Graeber describes how austerity unfolded in the United Kingdom following the banking crisis:

It was center-left New Labour that presided over the pre-crash bubble, and voters’ throw-the-bastards-out reaction brought a series of Conservative governments that soon discovered that a rhetoric of austerity—the Churchillian evocation of common sacrifice for the public good—played well with the British public, allowing them to win broad popular acceptance for policies designed to pare down what little remained of the British welfare state and redistribute resources upward, toward the rich.

“There is no magic money tree,” as Theresa May put it during the snap election of 2017—virtually the only memorable line from one of the most lackluster campaigns in British history. The phrase has been repeated endlessly in the media, whenever someone asks why the UK is the only country in Western Europe that charges university tuition, or whether it is really necessary to have quite so many people sleeping on the streets.

Against Economics (NY Review of Books)

The contrary belief is that a sovereign government is not revenue-constrained at the national level, and that additional spending would create additional revenue; in other words, spending precedes taxation, and not vice-versa. Also, national debts are, by and large, debts owed to ourselves. A government is not like a household; one person’s spending is another’s income.
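The accounting behind that last sentence can be sketched with invented numbers (a stylized closed economy, ignoring the foreign sector): every deficit on the government’s books appears, penny for penny, as a surplus on the private sector’s.

```python
# Stylized closed-economy flow accounting with invented numbers:
# every payment is simultaneously someone else's income.
government = 0   # net financial position of the government sector
private = 0      # net financial position of everyone else

# Government spends $120: an outflow for government, income for the private sector.
government -= 120
private += 120

# Government taxes $100 back.
government += 100
private -= 100

print(government, private)  # -20 20 -- the public deficit is the private surplus
```

Nothing here says deficits are costless; the sketch only shows why “government debt” and “private savings” are two views of the same ledger.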

In addition, much of the current public debt was incurred by putting the private sector’s debts—run up mainly by a tiny investor class—onto the public books. Thus, critics argue, it is unfair to ask the public—especially the most vulnerable citizens—to pay for the excesses of the rich. Such criticisms are dismissed by Neoliberals.

Privatization. The philosophical justification for this is provided above. The argument is that private firms competing in markets are inherently efficient, and that government is not, since governments are not subject to competitive market forces and the drive to maximize profit.

Thus, Neoliberals argue, by selling off government assets to private firms to run, those assets will be managed more efficiently and effectively. This, they claim, will save taxpayer money.

Although privatization is theoretically justified by the model of idealized markets, empirical evidence of such extensive savings has never materialized. Yet privatization—the permanent selling off of public assets in order to cover short-term budget shortfalls—persists nevertheless.

Ironically, this has also unfolded alongside an unprecedented monopolization of the private sector, often in business sectors with which the public has little direct contact.

The inverse of privatization is nationalization. The U.S. commonly nationalizes assets only when they are failing, bails them out with public money, and then turns them back over to the same people who caused the failure. Other nationalizations—particularly of successful or profitable businesses—have not been contemplated since the 1980s.

In other sovereign countries where profitable resources or businesses have been nationalized in the Neoliberal era, nationalization has often been followed by an invasion, economic sanctions, or a coup backed by Western governments, especially the US and Great Britain (Iran, Chile, Venezuela, Bolivia, etc.).

Globalization. This is intrinsically wrapped up with “free trade” agreements. The idea is that global free trade makes everyone better off. It’s easy to see how the rhetoric of idealized markets leads to this conclusion.

In practice, globalization meant that labor was allocated to where it was cheapest, i.e. the developing world—especially places like China and India, but also Southeast Asia and Latin America.

Of course, this meant that people in those developing countries saw their wages rise relative to where they were before. In addition, many people who had been subsistence farmers working for themselves became manufacturing employees working for wages in overcrowded cities.

What this means is that, on paper, incomes for much of the developing world rose relative to what they were before.

This has subsequently provided the major moral justification for Neoliberalism by its most prominent spokespeople: it has “raised millions of people out of poverty,” by harnessing the remarkable power of “free markets.”

Meanwhile, the rampant poverty, economic devastation, despair, hopelessness, and even shortened life expectancies in the developed world are waved away by Neoliberals because, numerically, when all the sums are totaled up, more people across the entire planet are relatively better off than worse off! This is documented by the so-called “elephant chart” developed by economist Branko Milanovic. The same chart also demonstrates that the world’s richest people have profited more than anyone else, in both absolute and relative terms, under Neoliberalism.

This reduction in global poverty is attributed to “freeing” markets from their shackles, claim Neoliberal apologists, and any rollback or restrictions would condemn millions to poverty, they say. As far as the “losers” of globalization go, well, that’s just too bad. Neoliberals typically advocate for “retraining” such people to compete in the new job markets, or for “more education,” or, barring that, to simply “move to where the jobs are.” If that doesn’t work, see rule b.), above.

Flexible Labor. By making it much easier to hire and fire people, the thinking goes, labor will be allocated more efficiently, which leads to economic expansion that makes more jobs available overall, and so is actually better for everyone, even if it seems like it’s not.

This also ties into ideas about immigration. Neoliberals tend to favor open borders, with the idea that people will flow around the world like capital to where they will be most needed.

An ideal example of the deregulation of labor protections and the rise of “flexible” labor is the so-called zero-hours contract. As the BBC describes them, “Zero-hours contracts, or casual contracts, allow employers to hire staff with no guarantee of work. They mean employees work only when they are needed by employers, often at short notice. Their pay depends on how many hours they work.”

Another example is the “gig economy,” which has created a whole new class of precarious workers. In the “gig economy,” workers are classified as “independent contractors,” and hence the companies that utilize their labor (such as Uber and TaskRabbit) are exempt from having to provide any benefits.

…the gig economy provides a new way of concealing employers’ authority. People who work for such online platforms as Uber, Lyft and Deliveroo are classed not as employees but as self-employed. They are supposedly flexible entrepreneurs, free to choose when they work, how they work and who they work for.

In practice, this isn’t the case. Unlike performers in the entertainment industry (which gives the ‘gig’ economy its name), most gig workers don’t work for an array of organisations but depend for their pay on just one or two huge companies. The gig worker doesn’t really have much in common with the ideal of the entrepreneur – there is little room in their jobs for creativity, change or innovation – except that gig workers also take a lot of risks: they have no benefits, holiday or sick pay, and they are vulnerable to the whims of their customers.

In many countries, gig workers (or ‘independent contractors’) have none of the rights that make the asymmetry of the employment contract bearable: no overtime, no breaks, no protection from sexual harassment or redundancy pay. They don’t have the right to belong to a union, or to organise one, and they aren’t entitled to the minimum wage. Most aren’t autonomous, independent free agents, or students, part-timers or retirees supplementing their income; rather, they are people who need to do gig work simply to get by.

What is new about the gig economy isn’t that it gives workers flexibility and independence, but that it gives employers something they have otherwise found difficult to attain: workers who are not, technically, their employees but who are nonetheless subject to their discipline and subordinate to their authority. The dystopian promise of the gig economy is that it will create an army of precarious workers for whose welfare employers take no responsibility. Its emergence has been welcomed by neoliberal thinkers, policymakers and firms who see it as progress in their efforts to transform the way work is organised.

What counts as work? (London Review of Books)

This concept of flexible labor also eschews what it describes as “distortion” of the labor market. It regards things like minimum wages, unions, and any restrictions on hiring/firing employees to be such “distortions.”

A draft of the World Bank’s annual flagship World Development Report says that its creditor-states (the poorest countries in the world) should eliminate their minimum wage rules, allow employers to fire workers without cause, and repeal laws limiting abusive employment contract terms. The bank argues that this is necessary to stop employers from simply investing in automation and eliminating workers altogether.

Are there no workhouses? (BoingBoing)

Low Taxes and Supply-Side Economics. By leaving more money in the private economy, with less of it in the public purse, Neoliberals claim, that money will be invested more wisely in markets by the investor class than it would ever be spent by the government. That’s because investors are seeking profits, while the government is not. It’s “your money,” they say.

According to Investopedia, supply-side economics is “the controversial idea that greater tax cuts for investors and entrepreneurs provide incentives to save and invest, and produce economic benefits that trickle down into the overall economy.” Essentially, since government is the problem, if less money winds up in the hands of government and more in the hands of the private sector—investors and entrepreneurs—then everyone will be better off. Recall the resource allocation argument, above.

In the past, this has been derided as “horse and sparrow” economics. The idea is that if you feed enough oats to the horse, some will eventually pass through to feed the sparrows. Yet it remains a core feature of Neoliberalism.

Financialization and financial engineering. Money is made through financial speculation rather than creating and selling real goods and services. All sorts of new and exotic financial instruments are created and traded in order to make money. The size of the financial sector (banks, trading houses, hedge funds, etc.) relative to the real productive economy increases dramatically. Gambling becomes the economy’s most profitable activity.

Many of the largest companies in the United States are highly financialized, distributing almost all, and often more, of their profits to shareholders in the form of stock buybacks and cash dividends…In their book, Predatory Value Extraction…William Lazonick and Jang-Sup Shin call the increase in stock buybacks since the early 1980s “the legalized looting of the U.S. business corporation,” while in a forthcoming paper, Lazonick and Ken Jacobson identify Securities and Exchange Commission Rule 10b-18, adopted by the regulatory agency in 1982 with little public scrutiny, as a “license to loot.” A growing body of research, much of it focusing on particular industries and companies, supports the argument that the financialization of the U.S. business corporation, reflected in massive distributions to shareholders, bears prime responsibility for extreme concentration of income among the richest U.S. households, the erosion of middle-class employment opportunities in the United States, and the loss of U.S. competitiveness in the global economy.

Financialization of the U.S. Pharmaceutical Industry (Naked Capitalism)

So, for example, automobile companies make more money by selling loans for cars than they do by making and selling the actual cars. This model is subsequently expanded to all businesses in all sectors. Companies’ earnings are increasingly plowed into financial assets rather than back into the business. Companies inflate their stock prices through stock buybacks. Consumption is increasingly financed by borrowing from financial institutions. Everything is leveraged through debt. As the Roosevelt Institute notes:

…financialization has brought about a “portfolio society,” one in which “entire categories of social life have been securitized, turned into a kind of capital” or an investment to be managed. We now view our education and labor as “human capital,” and we imagine every person as a little corporation set to manage his or her own investments. In this view, public functions and responsibilities are mere services that should be run for profit or privatized, or both.

Some effects of financialization include: hedge funds, asset-stripping, stock buybacks, leveraged buyouts, hostile takeovers, pump-and-dump schemes, derivatives, private equity, junk bonds, shareholder primacy, and offshore tax havens.

Much of this has been enabled by financial deregulation. Many of the above things used to be illegal.
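The appeal of one item on that list, the stock buyback, is simple arithmetic. Here is a toy illustration with invented numbers:

```python
# Toy stock-buyback arithmetic with invented numbers.
earnings = 1_000_000     # annual profit, unchanged throughout
shares = 1_000_000       # shares outstanding

eps_before = earnings / shares   # $1.00 of earnings per share

shares -= 200_000                # company spends cash to retire 20% of its shares
eps_after = earnings / shares    # $1.25 of earnings per share

print(eps_before, eps_after)  # 1.0 1.25 -- per-share metrics improve even though
                              # nothing about the underlying business has changed
```

Per-share earnings rise (and executive pay tied to per-share targets often rises with them) without a single new product, which is why critics like Lazonick describe buybacks as value extraction rather than value creation.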

Deregulation. Deregulation is based on the idea that people in markets are rational actors with perfect information, and that markets are self-adjusting in the long run. The reasoning is quite complex and involves a lot of math (the relationship between the math and the real world is rather questionable, however). But the core idea is that since no single person or entity controls the market, the market cannot remain wrong forever. In other words, the aggregate of millions of people making individual decisions means that the market is self-correcting; “irrational” behavior will always be weeded out in the end, and everything—including financial assets such as stocks and bonds—will end up at its “natural” price, just as water runs downhill.
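That self-correction story amounts to a simple feedback loop. Here is a toy caricature of it (a textbook “tâtonnement” using the same invented curves as the equilibrium sketch earlier; not a claim about how real markets behave):

```python
# Toy "self-correcting market": price moves in proportion to excess demand.

def demand(price):
    return 100 - 2 * price

def supply(price):
    return 10 + 4 * price

price = 5.0                   # start well away from equilibrium (which is 15)
for _ in range(50):
    excess_demand = demand(price) - supply(price)
    price += 0.05 * excess_demand   # shortage pushes price up, glut pushes it down

print(round(price, 2))  # ~15.0 -- the "natural" price the story promises
```

In the toy loop, convergence is guaranteed by construction; whether real markets behave anything like this is precisely what the critics dispute.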

Thus, regulations are not needed, and are counterproductive. By giving actors in the market total “freedom,” we will all be made better off, argue Neoliberals. Also, since transactions in the market are all “voluntary contracts” that are “freely negotiated,” the thinking goes, there is no need to step in and constrain people’s “freedom” to make whatever choices they wish. Laws that attempt to protect the public are derided as the “nanny state.” Again, this is dressed up in the rhetoric of “freedom” versus “coercion,” which is why so many Neoliberal and Libertarian think-tanks have the words “freedom” and “liberty” in their names.

“Beginning in the latter half of the Obama administration, Federalist Society gatherings grew increasingly fixated on diminishing the power of federal agencies to regulate businesses and the public — an agenda that would severely weaken seminal laws such as the Clean Air Act and the Clean Water Act. On Monday, Justice Brett Kavanaugh signaled that he is on board with this agenda.”

Brett Kavanaugh’s latest opinion should terrify Democrats (Vox)

Furthermore, in addition to restricting “freedom,” it is argued that regulations “stifle growth.” Economic growth for its own sake is the lodestar of Neoliberalism, and anything that gets in the way of growth is bad, according to them. For example, this is from the website of what’s been called the “libertarian internationale,” the Atlas Network (named for the Ayn Rand novel Atlas Shrugged):

These massive regulatory codes have serious effects on taxpayers, namely that they restrict their freedoms and ability to do business, and, more broadly speaking, too much regulation stifles economic growth and jobs creation. To illustrate this point, recent research from Mercatus indicates that the accumulation of rules over the past several decades has slowed economic growth, amounting to an estimated $4 trillion loss in U.S. GDP in 2012.

Mercatus Center Case Study: Provide [sic] a Catalyst for Reining in Out-of-Control Government Regulations (The Atlas Network)

Vouchers. We covered this above. Since only markets can allocate resources effectively, Neoliberals claim, it makes more sense to give you vouchers so that you can go out and shop for all your needs, rather than letting the government provide them for you.

Their first preference is, as I said, to “leave more money in your pockets!” But for essential services that people might not have enough money to buy on their own, rather than governments providing them, Neoliberals advocate for vouchers so that you can go out and wheel and deal in the market.

Of course, what happens if the checks don’t go quite far enough? Well, since the market cannot fail, and its prices are supposedly in “equilibrium,” you simply have to go without.

Also, because vouchers are only given to certain targeted groups, it becomes very easy to demonize their recipients as “lazy” or “moochers”—unlike benefits that are provided to everyone as a condition of citizenship, from which everyone gains. This leads to vouchers being stripped back, or even eliminated, over time by Neoliberal politicians.

Private Charity. Have you noticed that in nearly every store you go to today, you are asked by these huge mega-corporations (either by a low-paid employee or the card reader) to make a donation to some sort of private charity? Even Amazon—owned by one of the wealthiest people who has ever lived—hits up everyone who comes to the site for charitable donations.

Yep, that’s Neoliberalism in action, too.

Because Neoliberalism celebrates minimalist government, it doesn’t like the idea of government providing a social safety net. Neoliberals believe that governments taxing people to alleviate poverty is “coercive”; charitable donations, by contrast, are “voluntary” and thus more moral. They believe that private charities are the way to meet the needs of the most vulnerable citizens, rather than government taking responsibility for its citizens.

In addition, the “winners” in the economy make a big show of donating large parts of their incalculable fortunes to various “good causes”, and this is seen as proof positive that private charity can alleviate social problems more effectively than any governments can.

If a single cultural idea has upheld the disproportionate power of this [the winners of our new Gilded Age], it has been the idea of the “win-win.” They could get rich and then “give back” to you: win-win. They could run a fund that made them sizable returns and offered you social returns too: win-win. They could sell sugary drinks to children in schools and work on public-private partnerships to improve children’s health: win-win. They could build cutthroat technology monopolies and get credit for serving to connect humanity and foster community: win-win.

How America’s Elites Lost Their Grip in 2019 (Time)

In other words, it follows from their glorification of private initiative: the privatization of the social safety net.

Conclusion

Hopefully, that clears things up a bit. I’ve tried to explain the way Neoliberals think about the world rather than to offer a detailed critique of their ideas—that’s material for another post. But I’m sure you’ve realized why Neoliberalism persists—and will persist—no matter how often it is invalidated by real-life circumstances. What it has done very well is empower the richest people on the planet, and redistribute our wealth upward like never before. That is why its theoretical justifications—which I hope I’ve adequately explained—continue to be propagated by politicians, economists, academics, and the mainstream corporate media, no matter how badly they’ve failed in the real world, or how sophisticated the arguments against them.

If you want to learn more, here are some of the best resources that I’ve found:

First, listen to this documentary: Is Neoliberalism Destroying the World? from the Canadian Broadcasting Corporation.

Neoliberalism: Political Success, Economic Failure (Naked Capitalism)

Neoliberalism: the idea that swallowed the world (Guardian Long Read)

Neoliberalism, the Revolution in Reverse (The Baffler)

The Demise of Kinship in Europe

We’ve talked extensively about how the basic constituent of human society is the extended kinship group. In many parts of the world, this is still the default form of human social organization. If there is any “natural” form of human social organization discernible from evolutionary biology, this is it.

From it all the basic structures of traditional societies are derived: religion, politics, law, marriage, inheritance, etc. We’ve frequently mentioned Henry Sumner Maine’s book, Ancient Law. The entire book can be summed up in the following passages:

[A]rchaic law … is full, in all its provinces, of the clearest indications that society in primitive times was not what it is assumed to be at present, a collection of *individuals*. In fact, and in the view of the men who composed it, it was an *aggregation of families*. The contrast may be most forcibly expressed by saying that the *unit* of an ancient society was the Family, of a modern society the Individual.

We must be prepared to find in ancient law all the consequences of this difference. It is so framed as to be adjusted to a system of small independent corporations. It is therefore scanty, because it is supplemented by the despotic commands of the heads of households. It is ceremonious, because the transactions to which it pays regard resemble international concerns much more than the quick play of intercourse between individuals. Above all it has a peculiarity of which the full importance cannot be shown at present.

It takes a view of *life* wholly unlike any which appears in developed jurisprudence. Corporations *never die*, and accordingly primitive law considers the entities with which it deals, i. e. the patriarchal or family groups, as perpetual and inextinguishable. This view is closely allied to the peculiar aspect under which, in very ancient times, moral attributes present themselves.

The moral elevation and moral debasement of the individual appear to be confounded with, or postponed to, the merits and offences of the group to which the individual belongs. If the community sins, its guilt is much more than the sum of the offences committed by its members; the crime is a corporate act, and extends in its consequences to many more persons than have shared in its actual perpetration. If, on the other hand, the individual is conspicuously guilty, it is his children, his kinsfolk, his tribesmen, or his fellow-citizens, who suffer with him, and sometimes for him.

It thus happens that the ideas of moral responsibility and retribution often seem to be more clearly realised at very ancient than at more advanced periods, for, as the family group is immortal, and its liability to punishment indefinite, the primitive mind is not perplexed by the questions which become troublesome as soon as the individual is conceived as altogether separate from the group.

https://oll.libertyfund.org/titles/2001#Maine_1376_90 (italics in original, emphasis mine)

On the difference between laws based on lone individuals, and laws based on social groups, he writes:

…It will be observed, that the acts and motives which these theories [of jurisprudence] suppose are the acts and motives of Individuals. It is each Individual who for himself subscribes the Social Compact. It is some shifting sandbank in which the grains are Individual men, that according to the theory of Hobbes is hardened into the social rock by the wholesome discipline of force…

But Ancient Law, it must again be repeated, knows next to nothing of Individuals. It is concerned not with Individuals, but with Families, not with single human beings, but groups. Even when the law of the State has succeeded in permeating the small circles of kindred into which it had originally no means of penetrating, the view it takes of Individuals is curiously different from that taken by jurisprudence in its maturest stage. The life of each citizen is not regarded as limited by birth and death; it is but a continuation of the existence of his forefathers, and it will be prolonged in the existence of his descendants…

https://oll.libertyfund.org/titles/2001#Maine_1376_164

As we saw last time, these are called identity rules, as opposed to personal rules, which deal mainly with specific, unique individuals, and general rules, which theoretically apply to everyone equally, regardless of one’s rank, kinship group, ethnic background, religious beliefs, wealth, or any other intrinsic characteristic.

Last time we saw that general rules came about because it became impossible for rulers to sort people by religion after the Catholic Church fragmented, despite numerous failed attempts by “all the king’s horses and all the king’s men” to put Humpty Dumpty back together again. Religious minorities began springing up all over Europe like mushrooms after a rain, challenging the old ways of ruling. Martin Luther only wanted to reform the universal Church; instead he broke it apart. Luther’s emphasis on a personal relationship with God through reading the Bible directly (something that was only possible in Early Modernity), meant that the intermediaries between man and God—the Church and priesthood—saw their power and influence diminish. This, in turn, empowered ambitious Early Modern rulers.

General rules supplanted the ancient laws described by Maine above, leading to a more fragmented and individualistic society. This, in turn, allowed for the commodification of land and labor which is necessary for capitalism. For example, the selling off of the monasteries seems to have kickstarted the first large real estate markets in England. As Maine argued, status was replaced by contract; Gemeinschaft was supplanted by Gesellschaft.

But, in reality, individualism in Europe was under way long before that.

Wither Tribes?

Europe has long shown a curious lack of extended kinship groups, that is, tribes. If you’ve read Roman history, you know that the Western Empire came under pressure from large migrations of tribal peoples that we subsume under the label “Germanic,” due to their languages, along with other exotic arrivals like the Asiatic Huns (popularly, though dubiously, claimed as ancestors of the modern-day Hungarians). Their tribal structure, from what little we can determine, seems to have been quite similar to that of tribal peoples the world over, including in North America, Africa, and Asia.

Location of the Germanic tribes on the border of the Roman Empire before the Marcomannic Wars ca. 50AD by Karl Udo Gerth (2009) [Source]

I’m sure you can recall the names of some of them: the Lombards, the Alemanni, the Burgundians, the Visigoths, the Ostrogoths, the Frisians, the Angles and Saxons, the Beans and Franks, and many, many more. The Goths managed to devastate the Roman Empire despite their mopey attitudes and all-black clothing, while the Vandals left spraypaint up and down the Iberian peninsula and down into North Africa.

As I said last time, ancient societies were collectivist by default. But this all changed, particularly in Western Europe. Why there? Why was Europe the apparent birthplace of this radically new way of life?

That’s the subject of the paper I’m discussing today, which has received a fairly large amount of press attention. The paper itself is 178 pages—basically a small book (although much of that is data). The idea is that these extended kinship groups were broken up by the Roman Catholic Church via its strict prohibition against marriages between close kin, especially between cousins.

[A] new study traces the origins of contemporary individualism to the powerful influence of the Catholic Church in Europe more than 1,000 years ago, during the Middle Ages.

According to the researchers, strict church policies on marriage and family structure completely upended existing social norms and led to what they call “global psychological variation,” major changes in behavior and thinking that transformed the very nature of the European populations.

The study, published this week in Science, combines anthropology, psychology and history to track the evolution of the West, as we know it, from its roots in “kin-based” societies. The antecedents consisted of clans, derived from networks of tightly interconnected ties, that cultivated conformity, obedience and in-group loyalty—while displaying less trust and fairness with strangers and discouraging independence and analytic thinking.

The engine of that evolution, the authors propose, was the church’s obsession with incest and its determination to wipe out the marriages between cousins that those societies were built on. The result, the paper says, was the rise of “small, nuclear households, weak family ties, and residential mobility,” along with less conformity, more individuality, and, ultimately, a set of values and a psychological outlook that characterize the Western world. The impact of this change was clear: the longer a society’s exposure to the church, the greater the effect.

Around A.D. 500, explains Joseph Henrich, chair of Harvard University’s department of human evolutionary biology and senior author of the study, “the Western church, unlike other brands of Christianity and other religions, begins to implement this marriage and family program, which systematically breaks down these clans and kindreds of Europe into monogamous nuclear families. And we make the case that this then results in these psychological differences.”

Western Individualism Arose from the Incest Taboo (Scientific American)

The medieval Catholic Church may have helped spark Western individualism (Science News)

Although reported as if it were some sort of new discovery, this concept is hardly new. In fact, this hypothesis has been around for quite a long time—since at least the 1980s. Francis Fukuyama’s book, “The Origins of Political Order,” even has a chapter entitled “Christianity Undermines the Family,” where he expounds this hypothesis in detail. As another example, the most popular post on the notorious hbd chick’s blog is entitled whatever happened to european tribes? (hbd chick does not use capital letters), and dates from 2011. She quotes a paper by Avner Greif (whom we met last time): “Family structure, institutions, and growth – the origin and implications of Western corporatism”.

“The medieval church instituted marriage laws and practices that undermined large kinship groups. From as early as the fourth century, it discouraged practices that enlarged the family, such as adoption, polygamy, concubinage, divorce, and remarriage. It severely prohibited marriages among individuals of the same blood (consanguineous marriages), which had constituted a means to create and maintain kinship groups throughout history. The church also curtailed parents’ abilities to retain kinship ties through arranged marriages by prohibiting unions in which the bride didn’t explicitly agree to the union.

“European family structures did not evolve monotonically toward the nuclear family nor was their evolution geographically and socially uniform. However, by the late medieval period the nuclear family was dominant. Even among the Germanic tribes, by the eighth century the term family denoted one’s immediate family, and shortly afterwards tribes were no longer institutionally relevant. Thirteenth-century English court rolls reflect that even cousins were as likely to be in the presence of non-kin as with each other.”

Hbd chick speculates as to why this might be the case (again, no caps for her):

the leaders of the church probably instituted these reproductive reforms for their own gain — get rid of extended families and you reduce the number of family members likely to demand a share of someone’s legacy. in other words, the church might get the loot before some distant kin that the dead guy never met does. (same with not allowing widows to remarry. if a widow remarries, her new husband would inherit whatever wealth she had. h*ck, she might even have some kids with her new husband! but, leave her a widow and, if she has no children, it’s more likely she’ll leave more of her wealth to the church.)

but, inadvertently, they also seem to have laid the groundwork for the civilized western world. by banning cousin marriage, tribes disappeared. extended familial ties disappeared. all of the genetic bonds in european society were loosened. society became more “corporate” (which is greif’s main point).

whatever happened to european tribes (hbd chick)

Cousin Marriage? Ewww!

Now, for us Westerners, the idea of marrying your cousin is kind of gross (which might be an additional confirmation of the thesis). In the United States, jokes about cousin marriage and inbreeding are commonly aimed at people living in Appalachia. The movie Deliverance cemented this in the popular consciousness.

But if you know anything about anthropology, you know that cousin marriage isn’t all that uncommon around the world; in fact, in some societies it’s considered the most desirable match! Societies use kinship terms to distinguish between parallel cousins (the children of same-sex siblings—your father’s brother’s children or your mother’s sister’s children) and cross-cousins (the children of opposite-sex siblings). In most societies, cross-cousin marriage is okay (maybe even preferred), but parallel-cousin marriage is a no-no. That’s why the term for “sibling” in many languages often encompasses parallel cousins: marrying your parallel cousin is the same as marrying your brother or sister, i.e. it’s incest. What the Church did, then, was greatly expand the definition of incest:

In many societies, differentiated cousin-terms are prescriptive of the people one can/should or is forbidden to marry. For example, in the Iroquois kinship terminology, parallel cousins (e.g. father’s brother’s daughter) are likewise called brother and sister–an indication of an incest taboo against parallel cousin-marriage. Cross-cousins (e.g. father’s sister’s daughter) are termed differently and are often preferred marriage partners. [1]

And, of course, the choice of marriage partners in a hyper-localized world with basically nothing in the way of mass communication and very little in the way of long-distance transportation would have been far more restricted than what we are used to. The simple invention of the bicycle in the 1800s widened the pool of potential marriage partners:

The likelihood of finding a suitable marriage partner depends not only on the degree to which one becomes acquainted with the possible marriage partners in a region but also on the changing boundaries of what constitutes a region. A great many studies, on all parts of the globe, have demonstrated that most people tend to marry someone living close by. On foot in accessible terrain – that is, no mud, rivers, mountains, and gorges – one can perhaps walk 20 kilometers [12.4 miles] to another village and walk the same distance back on the same day.

This distance comes close to the limit of trust that separated the known universe from the “unsafe” world beyond. If marriage “horizons” expanded, young suitors would be able to meet more potential marriage partners. The increase in the means and speed of transportation brought about by new and improved roads and canals, and by new means of transport such as the train, the bicycle, the tram, and the motorcar brought a wider range of potential spouses within reach. These new means of transport increased the distance one could travel during the same day, and thus expanded the geographical marriage horizon. [2]
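A rough back-of-envelope gloss on that passage (my own illustration, not van Leeuwen’s—the bicycle range in particular is an assumed figure): if potential partners are spread more or less evenly across the countryside, the pool grows with the area one can reach, which scales with the square of the travel radius:

N(r) ∝ πr²

So tripling the day’s travel radius—say, from 20 kilometers on foot to roughly 60 by bicycle—multiplies the pool of potential partners by about nine, not three.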

Arranged marriages between kin are designed to keep land and wealth in the same extended kinship lines, rather than breaking them up or turning them over to other families. In societies where lineages are ranked, losing such land and property means a downgrade in social status. That’s why you get extreme versions like sibling marriage in ancient Egypt (with the associated birth defects). Even in fairly modern times, European royalty had a very small pool of suitable marriage partners to choose from (Prince Philip is, in fact, a distant cousin of Queen Elizabeth—no jokes about Prince Charles, please).

Impact of Europe’s Royal Inbreeding: Part II

Although in the modern, developed world, cousin marriage is fairly rare, it’s somewhat more common in societies which are often labelled “traditional”. It does occur among some communities even in the West, however: Did my children die because I married my cousin? (BBC). And I’ve always found a great irony in the fact that Darwin himself married his first cousin.

So, for anthropologists, the prohibition against cousin marriage is a big deal.

WEMP and HL

Anthropologists and historians also discern a distinct marriage pattern in medieval Western Europe, different from much of the rest of the contemporary world; distinct enough to merit the uninspired name of the Western European Marriage Pattern (WEMP). Its distinctive features are:

– Strict monogamy, i.e. no polygyny. We think of this as normal, but in terms of sheer numbers, most cultures have been polygynous (one man being able to marry multiple wives). Monogamy was the norm for Indo-European cultures even before Christianity (e.g. ancient Greece, Rome, India).

– Relatively late age of first marriage. Many cultures married off women at puberty or shortly thereafter – anywhere from 13 to 16 years old. This was seen as necessary in an era of high infant and maternal mortality. But in Europe, both men and women married much later—often in their late twenties, or even older for men. Also, the difference in ages between men and women was slight—typically only a few years. Yet in many parts of the world even today, very young women will be married off to prestigious men who are old enough to be their grandfather! Some people, of course, still lament this change, specifically Judge Roy Moore and everyone involved with Jeffrey Epstein.

– Divorce was difficult to obtain. Marriage was seen as a lifetime commitment, and divorce was accordingly hard to get – just ask Henry the Eighth. Of course, given higher mortality rates – especially in childbirth – in practice this meant “till death do us part” was less of a commitment back then. Today we practice serial monogamy – one partner at a time, but less of a lifetime commitment.

– Many people not marrying at all. See: medieval singlewomen.

– Marriage was voluntary on the part of the woman. No forced marriages here (unless it was to secure some kind of political alliance).

– Fewer children. Rather than just pump out a litter, European couples had fewer children, yet the population still grew overall. No one is quite sure why, but the relatively high status of women may have had something to do with it. Of course, it’s harder to have a large number of children with just one wife, although some people like J.S. Bach managed to do it. As Wikipedia summarizes, “women married as adults rather than as dependents, often worked before marriage and brought some skills into the marriage, were less likely to be exhausted by constant pregnancy, and were about the same age as their husbands.”

– Neolocal households and “nuclear” families. Leaving your parents’ household and establishing your own separate household is, again, fairly standard for us Westerners, but in many places it is atypical. Married couples often live with their extended families in much of the rest of the world: Africa, Asia, Oceania, etc. Even in eastern Europe it was fairly common for couples to live in an extended family household under the control of a patriarch (leading to all sorts of drama). Speaking of Eastern Europe:

The reason it’s called the *Western* European Marriage Pattern is that there is an imaginary line dividing it from the rest of the continent. The divergence in marriage patterns and inheritance practices was discovered by a demographer named John Hajnal, and hence it is called the Hajnal Line (HL). It runs roughly from Trier to St. Petersburg. Some areas of Western Europe, such as Ireland and parts of southern Europe, also fall “outside” the Hajnal line.

To the west of the Hajnal line, about half of all women aged 15 to 50 were married at any given time, while the other half were widows or spinsters; to the east of the line, about seventy percent of women in that age bracket were married at any given time, while the other thirty percent were widows or nuns.

The marriage records of Western and Eastern Europe in the early 20th century illustrate this pattern vividly; west of the Hajnal line, only 25% of women aged 20–24 were married while to the east of the line, over 75% of women in this age group were married and less than five percent of women remained unmarried. Outside of Europe, women could be married even earlier and even fewer would remain celibate; in Korea, practically every woman 50 years of age had been married and spinsters were extremely rare, compared to 10–25% of women in western Europe age 50 who had never married.

Western European Marriage Pattern (Wikipedia)

Exposure to the Church

The idea is that the difference was brought about by the actions of the Catholic Church. More exposure to the Church meant weaker families and weaker kinship ties; less exposure meant that the “default” extended-family system was maintained.

Furthermore, there are some ideas that follow from that:

– Western Europeans have weaker family ties.

– Western Europeans have a greater sense of individualism and independent thinking, and a correspondingly higher tolerance for deviants and misfits than other cultures.

– Both of these traits were crucial for the development of capitalism.

The idea is that, since extended kinship groups and tribes disappeared, inclusive institutions were formed in Europe, rather than elsewhere, out of necessity. These inclusive institutions, as we saw last time, were critical for the development of general rules and Liberalism. Those developments, in turn, allowed the disruptive institutions of capitalism, as described by Marx, to rework social relations: “all that is solid melts into air.” And that is what led Western Europe to subsequently dominate the modern world. For example, this paper from 2017 by one of the new paper’s co-authors advances the hypothesis that institutional developments gave Western Europe the edge:

Why did Europe pull ahead of the rest of the world? In the year 1000 AD many regions like China or the Middle Easter [sic] were more advanced than Europe. This paper contributes to this debate by testing the hypothesis that the Churches’ [sic] medival [sic] marriage regulations constituted an important precondition for Europe’s exceptional economic development by fostering inclusive institutions. In the medieval period, Churches instituted marriage regulations (most prominently banning kin-marriages) that destroyed extended kin-networks. This allowed the formation of a civic society and inclusive institutions. Consistent with the idea that those marriage regulations were an important precondition for Europe’s institutional development, I present evidence that Western Church exposure already fostered the formation of city level inclusive institutions before 1500 AD

An important building block of the argument is that extended kin networks are detrimental to the formation of a civic society and inclusive institutions. The European kin-structure is unique in the world with the nuclear family dominating and kin marriages are almost absent. In parts of the world, first and second cousin-marriages account for more than 50 percent of all marriages. Kin-marriages lead to social closure and create much tighter family networks compared to less fractionalized societies where the nuclear family dominates for biological, sociological, and economic reasons: kin-selection predicts that the implied higher genetic relatedness increases altruistic behavior towards kin, kin-marriages decrease interaction with and therefore trust in outsiders, and they change economic incentives: supporting one’s nieces and nephews simultaneously benefits the prospective spouses of one’s own children. More importantly, though, in the absence of a supra-level inclusive institutions [sic], the family provides protection and insurance creating a stable equilibrium where individual deviation from loyalty demands is costly. Excessive reliance on the family, nepotism, and other contingencies of strong extended kin-groups in turn impede social cohesion and the formation of states with inclusive institutions.

In line with Acemoglu, Johnson, Robinson and Yared’s notion of critical junctures this paper provides evidence that the Churches’ marriage regulations changed Europe’s social structure by pushing it away from a kin-based society, and paved the way for Europe’s special developmental path. The Churches’ marriage regulations – most prominently the banning of consanguineous marriages (“marriages of the same blood”) – were starting to be imposed in the early medieval ages. Backed by secular rulers, this ban was accompanied by severe punishment of transgressions and was very comprehensive – the Western Church at times prohibited marriages up to the seventh degree of relatedness (that is, marriage between two people sharing one of their 128 great-great-great-great-great-grandparents). Clearly it was impossible to trace and enforce the ban to this degree, yet it demonstrates its severity. The eastern Church also banned cousin-marriage but never to the same extent (providing variation within Christian countries). [3]
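A quick gloss on the arithmetic in that last parenthetical (my gloss, not Schulz’s): going back n generations, each person has 2^n ancestors at that level, so

2^7 = 128

great-great-great-great-great-grandparents at the seventh degree. A couple sharing even one of those 128 ancestors was nominally too closely related to marry—hence the point that the ban was unenforceable at that range, but telling in its severity.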

I remember reading an anecdote from Jared Diamond a while ago, and I can’t remember whether it was in an interview or in one of his books (I wish I could find it). He was describing how someone in the village in Papua New Guinea where he was staying wanted to open an ice-cream shop to bring the glory of ice cream to the rest of the village. But this fellow ran into a small problem. In small villages in tribal New Guinea, everyone is basically related to everyone else in some way, however remote. When the budding entrepreneur tried to charge his cousins for an ice cream cone, they reacted with indignation—charging your relatives for something was considered a severe faux pas! The village was still primarily a reciprocal gift economy; people simply could not get their heads around the concept that they had to pay for stuff. In the end, he could make a profit only by alienating everyone else in the village, the very people he depended upon. The ice cream shop folded.

Why did the church do this? The authors speculate that it may have been less about scripture and more self-serving:

…the church’s focus on marriage proscriptions rose to the level of obsession. “They came to the view that marrying and having sex with these relatives, even if they were cousins, was something like sibling incest in that it made God angry,” he says. “And things like plagues were explained as a consequence of God’s dissent.”

The taboo against cousin marriage might have helped the church grow, adds Jonathan Schulz, an assistant professor of economics at George Mason University and first author of the paper. “For example,” he says, “it is easier to convert people once you get rid of ancestral gods. And the way to get rid of ancestral gods is to get rid of their foundation: family organization along lineages and the tracing of ancestral descent.”

Western Individualism Arose from the Incest Taboo (Scientific American)

While the Hajnal line was discovered back in 1965, nobody knew why marriage was so different west of the line than east of it until a 1983 book by Jack Goody called “The Development of the Family and Marriage in Europe.” Goody was an anthropologist who specialized in marriage customs and inheritance patterns around the world—things like dowries, bridewealth, primogeniture, partible inheritance, etc. From his study of Medieval Europe, following Hajnal’s discoveries, he was the first to put forward the idea that the Catholic Church’s prohibitions were the critical factor in the demise of the tribal structures and the subsequent rise of Western individualism. This is from Fukuyama’s Origins of Political Order:

Goody notes that the distinctive Western European marriage pattern began to branch off from the dominant Mediterranean pattern by the end of the Roman Empire. The Mediterranean pattern, which included the Roman gens, was strongly agnatic or patrilineal, leading to the segmentary organization of society. The agnatic group tended to be endogamous, with some preference for cross-cousin marriage. There was a strict separation of the sexes and little opportunity for the women to own property or participate in the public sphere. The Western European pattern was different in all these respects: inheritance was bilateral; cross-cousin marriage was banned and exogamy promoted; and women had greater rights to property and participation in public events.

The shift was driven by the Catholic church, which took a strong stand against four practices: marriages between close kin, marriages to the widows of dead relatives (the so-called levirate), the adoption of children, and divorce. The Venerable Bede, reporting on the efforts of Pope Gregory I to convert the pagan Anglo-Saxons to Christianity in the sixth century, notes how Gregory explicitly condemned the tribe’s practices of marriage to close relatives and the levirate. Later church edicts forbade concubinage, and promoted an indissoluble, monogamous lifetime marriage bond between men and women.

…The reason that the church took this stand, in Goody’s view, had much more to do with the material interests of the church than with theology. Cross-cousin marriage (or any other form of marriage between close relatives), the levirate, concubinage, adoption, and divorce are what he labels “strategies of kinship” whereby kinship groups are able to keep property under the group’s control as it is passed down from one generation to another….the church systematically cut off all the avenues families had for passing down property to descendants. At the same time, it strongly promoted voluntary donations of land and property to itself. The church stood to benefit materially from an increasing pool of property-owning Christians who died without heirs.

The relatively high status of women in Western Europe was an accidental by-product of the church’s self-interest. The church made it difficult for a widow to remarry within the family group and thereby reconvey her property back to the tribe, so she had to own the property herself. A woman’s right to own property and dispose of it as she wished stood to benefit the church, since it provided a large source of donations from childless widows and spinsters. And the woman’s right to own property spelled the death knell of agnatic lineages, by undermining the principle of unilateral descent.

The Catholic church did very well financially in the centuries following these changes in the rules…By the end of the seventh century, one-third of the productive land in France was in ecclesiastical hands; between the eighth and ninth centuries, church holdings in northern France, the German lands, and Italy doubled….The church thus found itself a large property owner, running large manors and overseeing the economic production of serfs throughout Europe. This helped the church in its mission of feeding the hungry and caring for the sick, and it also made possible a vast expansion of the priesthood, monasteries, and convents. But it also necessitated the evolution of an internal managerial hierarchy and set of rules within the church itself that made it an independent political player in medieval politics. [4]

Despite all this, it remained just a speculative hypothesis, unproven. What Henrich et al.’s paper does is amalgamate a large amount of interdisciplinary data to try and back up the hypothesis. Their idea is that such prohibitions would have altered the cultural behavior of those societies relative to the ones around them, and that this cultural behavior can be detected through things like church records, the use of intermediary financial instruments, the frequency of blood donations, and even unpaid parking fines. By establishing a correlation between Church exposure and these sorts of socio-cultural behaviors, they argue, we can see the roots of the cultural differences between the rest of the world and what they term WEIRD cultures: Western, Educated, Industrialized, Rich, and Democratic.

In the course of their research, Henrich and his colleagues created a database and calculated “the duration of exposure” to the Western church for every country in the world, as well as 440 “subnational European regions.” They then tested their predictions about the influence of the church at three levels: globally, at the national scale; regionally, within European countries; and among the adult children of immigrants in Europe from countries with varying degrees of exposure to the church.

In their comparison of kin-based and church-influenced populations, Henrich and his colleagues identified significant differences in everything from the frequency of blood donations to the use of checks (instead of cash) and the results of classic psychology tests—such as the passenger’s dilemma scenario, which elicits attitudes about telling a lie to help a friend. They even looked at the number of unpaid parking tickets accumulated by delegates to the United Nations…In their analysis of those tickets, the researchers found that over the course of one year, diplomats from countries with higher levels of “kinship intensity”—the prevalence of clans and very tight families in a society—had many more unpaid parking tickets than those from countries without such history.

The West itself is not uniform in kinship intensity. Working with cousin-marriage data from 92 provinces in Italy (derived from church records of requests for dispensations to allow the marriages), the researchers write, they found that “Italians from provinces with higher rates of cousin marriage take more loans from family and friends (instead of from banks), use fewer checks (preferring cash), and keep more of their wealth in cash instead of in banks, stocks, or other financial assets.” They were also observed to make fewer voluntary, unpaid blood donations.

Western Individualism Arose from the Incest Taboo (Scientific American)
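To make the shape of that analysis concrete, here is a minimal sketch of the kind of country-level correlation the paper reports. To be clear, this is my own gloss, not the authors’ code; the regions and numbers below are invented purely for illustration.

import numpy as np

# Hypothetical regions: centuries of exposure to the Western Church...
church_exposure = np.array([10.0, 9.0, 7.5, 4.0, 2.0, 0.5])

# ...and a behavioral proxy for kinship intensity in those same regions,
# e.g. the cousin-marriage rate (percent of all marriages)
cousin_marriage = np.array([0.3, 0.5, 1.0, 4.0, 20.0, 45.0])

# The paper's headline results amount to strong correlations like this one:
# longer Church exposure, lower kinship intensity
r = np.corrcoef(church_exposure, cousin_marriage)[0, 1]
print(f"correlation(exposure, cousin marriage) = {r:.2f}")  # strongly negative

The real study does this at three levels—globally, across the 440 European regions, and among the children of immigrants—and with far more careful controls, but the basic move is the same: exposure on one axis, a behavioral proxy on the other.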

This builds on Henrich’s previous finding that such WEIRD cultures score differently on certain psychological tests than people in the rest of the world. That paper was a widely-cited bombshell. For years, psychology studies confined themselves to Western subjects—particularly the undergraduate students at the universities where the studies were carried out. It was simply assumed that people thought pretty much the same way everywhere, and that Western college students could therefore safely be used as a stand-in for humans more generally.

Henrich, an anthropologist, took those studies and ran them with people from diverse tribal societies around the world—something that, remarkably, hadn’t been done before. The results he got indicated that using Westerners—particularly rich, well-educated ones—as stand-ins for the entire human race in psychological tests was fundamentally flawed. We are, in fact, outliers when it comes to human behavior. This has profound implications for economics and sociology.

The weirdest people in the world? (PDF)

If Westerners really are different, then why is that? This paper attempts to answer the question.

Kinship vs. Capitalism

Both Max Weber and Karl Marx realized that the destruction of large corporate kinship groups and the separation of the household from the market economy were prerequisites for later capitalist production. Both traced this change to sometime between the sixteenth and the nineteenth centuries. Weber focused on the culture of Protestantism as the cause, while Marx focused on the changing methods of economic production during the period, such as the Enclosure movement and the subsequent explosion of rootless wage laborers. Weber’s ideas were later expanded by the sociologist Talcott Parsons. Karl Polanyi likewise traced the shift from householding economies and cottage industries to Market Society to this same time frame.

However, a very influential book by Alan Macfarlane called “The Origins of English Individualism” argued that England was basically an individualistic culture by 1250—long before its continental neighbors.

By shifting the origins of capitalism well before the Black Death, we alter the nature of a number of other problems. One of these is the origin of modern individualism. Those who have written on the subject have always accepted the Marx-Weber chronology. For example, David Riesman assumed that modern individualism emerged out of an older collectivist, “traditional-directed” society, in the fifteenth and sixteenth centuries. Its growth was directly related to the Reformation, Renaissance and the break-up of the old feudal world. The “inner-directed” stage of intense individualism occurred in the period between the sixteenth and nineteenth centuries. Though a recent general survey of historical and philosophical writing on individualism concedes that some of the roots lie deep in classical and biblical times and also in medieval mysticism, still, in general, it stresses the Renaissance, Reformation and the Enlightenment as the period of great transition. Many of the strands of political, religious, ethical, economic and other types of individualism are traced back to Hobbes, Luther, Calvin and other post-1500 writers.

Yet, if the present thesis is correct, individualism in economic and social life is much older than this in England. In fact within the recorded period covered by our documents, it is not possible to find a time when an Englishman did not stand alone. Symbolized and shaped by his ego-centered kinship system, he stood in the center of the world. This means that it is no longer possible to “explain” the origins of English individualism in terms of either Protestantism, population change, the development of a market economy at the end of the middle ages, or other factors suggested by the writers cited. Individualism, however defined, predates sixteenth-century changes and can be said to shape them all. The explanation must lie elsewhere, but will remain obscure until we trace the origins back even further than attempted in this work. [5]

Macfarlane claims that already by the thirteenth century, the evidence indicates that England was no longer what he terms a “peasant society,” or what we’ve been referring to as a “traditional society.” Even way back then, he says, England had more in common with later capitalist societies than with traditional ones: freeholding of land, wage labor, free choice of marriage partners, individual inheritance, alienable property, geographical and social mobility, and so forth. From a review of the book:

The bulk of this short book is taken up by attempting to demonstrate that the characteristics of peasant society did not apply to England from the thirteenth century onward…In peasant societies land is not individualized but is held by the entire family through time and is seldom sold, since it is greatly revered; in England from the twelfth century onward, land was held by individuals (both men and women) and was often sold to nonfamily members, especially since geographical mobility of families was high and since children were sometimes disinherited.

In peasant societies the unit of ownership (the joint family) is also the unit of production and consumption; in England at the time the nuclear family (rather than the stem or joint family) was predominant, and the children often worked as servants for other families, rather than for their own families.

In peasant society, the families are economically almost self-sufficient, production for the market is small, and cash is scarce; in England at that time, the economy was highly monetized, agricultural production for the market was important, and the existence of elaborate books of accounts of farms attest to their “rational” attitudes toward money making (there was even money-lending for interest in rural areas).

In peasant societies there is a certain income and social equality between families that work on the land and a large gap in income and social status stands between them and other social groups, so that little mobility occurs between classes; in England at that time, considerable differentiation of wealth among the rural workers could be found and, in addition, some mobility between classes occurred.

Finally, in peasant societies women have a low age of marriage, their marriage partners are selected for them, and few remain unmarried; in England at that time, women apparently had a moderate age of marriage, selected their own partners, and, in many cases, did not marry at all. [6]

What this means is that the sociologists and economic historians who use England as the exemplar of a transition from a feudal, peasant society to a capitalist one are looking in the wrong time period! The transition took place long before the era they were examining, as Macfarlane explains:

…if we are correct in arguing that the English now have roughly the same family system as they had in about 1250, the arguments concerning kinship and marriage as a reflection of economic change become weaker. To have survived the Black Death, the Reformation, the Civil War, the move to the factories and the cities, the system must have been fairly durable and flexible. Indeed, it could be argued that it was its extreme individualism, the simplest form of molecular structure, which enabled it to survive and allowed society to change. Furthermore, if the family system pre-existed, rather than followed on industrialization, the causal link may have to be reversed, with industrialization as a consequence, rather than a cause, of the basic nature of the family. [7]

Macfarlane’s book did not answer the question as to why the English were so different from the rest of the continent (for additional criticism, see this [PDF]). However, beginning with Goody’s book, attention became focused on the efforts of the Catholic Church to break up kin groups in Anglo-Saxon England. This may have been where the practice began, as Henrich noted in a 2016 interview with Tyler Cowen:

When the church first began to spread its marriage-and-family program where it would dissolve all these complex kinship groups, it altered marriage. So it ended polygyny, it ended cousin marriage, which stopped the kind of . . . forced people to marry further away, which would build contacts between larger groups. That actually starts in 600 in Kent, Anglo-Saxon Kent.

Missionaries then spread out into Holland and northern France and places like that. At least in terms of timing, the marriage-and-family program gets its start in southern England.

Joseph Henrich on Cultural Evolution, WEIRD Societies, and Life Among Two Strange Tribes (Conversations with Tyler)

This might explain why Anglo-Saxon culture is so manifestly different from other cultures, with its emphasis on individualism, hustling, shallow social ties, and “making your own way.” This was further cemented by the fact that England was conquered by a foreign people in 1066—the Normans—who inserted themselves as a new ruling stratum above the local lords in the prevailing feudal system. As one of my readers pointed out, the Normans had contempt for those beneath them, so much so that they didn’t even bother to learn the local language of those they ruled over. The “Norman yoke” might be another ingredient in the origins of English attitudes toward individualism. As Brad DeLong put it, “The society of England becomes more unequal because William the Bastard from Normandy and his thugs with spears—300 families, plus their retainers—kill King Harold Godwinson, and declare that everyone in England owes him and his retainers 1/3 of their crop.” And besides, with such a hodgepodge of peoples—Normans entering an already multicultural society of Angles, Saxons, Jutes, Danes, various Celts, and so forth—it’s hard to see how a tribal society could have persisted on one small island in any event, absent strict prohibitions against intermarriage (Japan, for example, shows a similar lack of tribes, except for minorities like the Ainu people).

The feudal system, with its emphasis on contractual obligations, was itself a substitute for the tribal solidarity that by that time had already been eroded. Henry Maine argued that feudalism was an amalgamation of earlier tribal customs with imported Roman legal systems of voluntary contract:

Feudalism…was a compound of archaic barbarian usage with Roman law…A Fief was an organically complete brotherhood of associates whose proprietary and personal rights were inextricably blended together. It had much in common with an Indian Village Community and much in common with a Highland clan. But…the earliest feudal communities were neither bound together by mere sentiment nor recruited by a fiction. The tie which united them was Contract, and they obtained new associates by contracting with them...The lord had many of the characteristics of a patriarchal chieftain, but his prerogative was limited by a variety of settled customs traceable to the express conditions which had been agreed upon when the infeudation took place.

Hence flow the chief differences which forbid us to class the feudal societies with true archaic communities. They were much more durable and much more various…more durable, because express rules are less destructible than instinctive habits, and more various, because the contracts on which they were founded were adjusted to the minutest circumstances and wishes of the persons who surrendered or granted away their lands.

The medieval historian Marc Bloch also noted that feudalism was a substitute for earlier social ties which had already weakened:

Yet to the individual, threatened by the numerous dangers bred by an atmosphere of violence, the kinship group did not seem to offer adequate protections, even in the first feudal age. In the form in which it then existed, it was too vague and variable in its outlines, too deeply undermined by the duality of descent by male and female lines. That is why men were obliged to seek or accept other ties. On this point history is decisive, for the only regions in which powerful agnatic groups survived–German lands on the shores of the North Sea, Celtic districts of the British Isles–knew nothing of vassalage, the fief and the manor. The tie of kinship was one of the essential elements of feudal society; its relative weakness explains why there was feudalism at all. [8]

I should note that medieval guilds were also a response to this need for security; some historians of guilds trace their ancestry back to frith gilds, which were brotherhoods explicitly established for protection and defense.

And so, a society governed by explicit contracts and legal institutions centered on individuals became the norm in Western Europe long before the rest of the world. In the patchwork quilt of post-Roman Europe, some areas escaped infeudation altogether and retained elements of older, more traditional social orders. It was these remote communities that were studied in the late nineteenth century in order to uncover the lost world of Europe’s past tribal organization (for example, in Laveleye’s Primitive Property [9]). In other parts of Europe, feudal contracts took a myriad of alternative forms, as Maine noted above—so much so that medieval historians today dislike even using the term feudalism to describe the political arrangements of this time period, because the contracts themselves were so varied. They often note that what we call feudalism was hardly one monolithic system. But it does seem as though the specific arrangements of feudalism from country to country determined the subsequent and divergent paths that various Western European countries would take. In a paper entitled “English Feudalism and the Origins of Capitalism,” political scientist George Comninel argues that the specifics of English feudalism allowed capitalism to develop there, rather than in neighboring France:

The specific historical basis for the development of capitalism in England- and not in France – is ultimately to be traced to the unique structure of English manorial lordship. It is the absence from English lordship of the seigneurie banale – the political form of parcellised sovereignty which was central to the development of Continental feudalism – that can be seen to account for the peculiarly ‘economic’ turn taken in the development of English class relations of surplus extraction. The juridical and economic social relations necessary for capitalism were forged in the crucible of a peculiarly English form of feudal class society.

In France, by contrast, the distinctly political tenor of social development – visible in the rise of the absolutist state, in the intensely political character of the social conflict of the Revolution, and as late as the massively bureaucratic Bonapartist state of the Second Empire – can be traced just as specifically to the centrality of seigneurie banale in the fundamental relations of feudalism.

The effects flowing from this initial basic difference in feudal relations include: the unique differentiation of freehold and customary tenures among English peasants, in contrast to the survival of allodial land alongside censive tenures of France; the unique development of English common law, rooted in the land, in contrast to the Continental revival of Roman law, based on trade; the unique commoner status of English manorial lords, in contrast to the Continental nobility; and, most dramatically, in the unique enclosure movement by which England ceased to be a peasant society – ceased even to have peasants – before the advent of industrial capitalism, in stark contrast with other European societies. [10]

Final Notes

I’ve banged on for too long already, so I’m just going to close with a few notes.

Unfortunately, many of the ideas I’ve written about above have been largely discussed in the context of white supremacy and racialism, and this research will give succor to those who believe that the “white race” is unique and therefore superior to all other people on earth.

I don’t think that’s the intent of the paper at all, although I am a little disturbed by the associations with George Mason University—the epicenter of the Koch Brothers’ takeover of a wide swath of economics. However, I’ll give them the benefit of the doubt for now.

While the racialist and HBD movements online are determined to reduce everything to genes (in a perverse inverse of blank-slatism), it seems to me that these are cultural developments more than anything else, and they are worth studying.

The desire to root such cultural differences in biology is mainly an attempt by the Reactionary Right to justify the course of history and reify the status quo. For example: why is Africa poor? Not because Africans have been—and continue to be—exploited by Western colonial powers, but because they are stupid. The flip side, of course, is that Europeans are naturally smarter and more pro-social, and that this is baked into the genes, meaning that reform is unnecessary and impossible—it’s just “the way things are.” The Just World philosophy on the level of nations. It also rationalizes why immigration—no matter how limited—is bad, without admitting to pure racism. Rather, it’s “just science,” claims the HBD crowd, that Europeans are different and superior at the genetic level, and therefore must remain “pure and undiluted” in order to maintain Western civilization.

Suddenly, Conservatives Can’t Get Enough of Science (Arc magazine)

But I doubt that there is any genetic basis here. Yes, institutional and cultural beliefs are very persistent, and these are indeed barriers to “Westernizing” the rest of the world. But to put all of this down to genes without evidence—where is the gene for “clannishness”?—is not scientific; it’s political: exactly what they accuse the “radical Left” of engaging in.

Finally, I’ll just note that the places where kinship groups were broken up the earliest seem to have the highest rates of depression, suicide, and mental illness to this day, while those parts of the world that retained embedded human relationships—although significantly poorer—seem to be far happier and more content with life. It forces one to contemplate what the ultimate purpose of “progress” really is.

[1] Jonathan F. Schulz, Why Europe? The Church, Kin-Networks and Institutional Development (PDF), p. 5

[2] van Leeuwen et al., Marriage Choices and Class Boundaries: Social Endogamy in History, p. 9

[3] Jonathan F. Schulz, Why Europe? The Church, Kin-Networks and Institutional Development (PDF), pp. 2-3

[4] Francis Fukuyama, The Origins of Political Order, pp. 236-239

[5] Alan Macfarlane, The Origins of English Individualism: Some Surprises, p. 269

[6] Review of “The Origins of English Individualism: The Family, Property and Social Transition” by A. Macfarlane (PDF)

[7] Alan Macfarlane, The Origins of English Individualism: Some Surprises, pp. 270-271

[8] Quoted in Fukuyama, p. 236

[9] For example, Primitive Property, Chapter XV, p. 212:

Emile Souvestre, in his work on Finisterre, mentions the existence of agrarian communities in Brittany. He says it is not uncommon to find farms there, cultivated by several families associated together. He states that they live peacefully and prosperously, though there is no written agreement to define the shares and rights of associates. According to the account of the Abbé Delalandre, in the small islands of Hœdic and Houat, situated not far from Belle Isle, the inhabitants live in community. The soil is not divided into separate properties. All labour for the general interest, and live on the fruits of their collective industry. The curé is the head of the community; but in the case of important resolutions, he is assisted by a council composed of the twelve most respected of the older inhabitants. This system, if correctly described, presents one of the most archaic forms of primitive community.

[10] George C. Comninel, English Feudalism and the Origins of Capitalism (PDF), pp. 4-5

Religion and the Birth of Liberalism

I want to talk about this article that I found a while back on Cato Unbound called The Trouble in Getting to Denmark. Denmark is the example given by Francis Fukuyama as the ideal modern, peaceful Western Liberal democratic state. Inconveniently for the Cato Institute, it also has one of the most generous social safety nets in the world.

[Tangentially: Cato is all about promoting economic “freedom,” and Denmark is one of the freest and most entrepreneurial societies in the world. But it’s that way precisely because of its strong safety net and social democratic policies—policies that are being promoted by people like Bernie Sanders in the U.S. Also, see this: Never Trust the Cato Institute (Current Affairs)]

This post centers on a new history book by Mark Koyama and Noel Johnson called Persecution and Toleration: The Long Road to Religious Freedom. The authors are both professors at George Mason University and are affiliated with the Mercatus Center, which at first blush might make them a little suspect. But there are some very good historical insights here, which are well worth a look. I’ll also quote extensively from this interview with Koyama by Patrick Wyman on the Tides of History podcast, which covers the subject matter well. I’ve lightly altered some of the dialogue for clarity; quotes are from Koyama unless noted otherwise.

The book’s insights dovetail with what we’ve been talking about recently: the rise of the modern state—absolutist at first, liberal later. The thesis is that religious freedoms were basically the foundation for the rise of capital-L Liberalism—Liberalism being the idea of society as an assorted collection of solitary, self-directed individuals who must be free from any sort of predetermined social identity. Because this notion has become the hegemonic assumption of the modern world, we fail to recognize just how novel it really is. So let’s dive in…

The main thesis is succinctly stated by Patrick Wyman near the beginning of the podcast:

“The rise of modern states, which were capable of enforcing general rules throughout their territory–down to the local level–were the precondition for religious peace and the eventual rise of religious and other freedoms, which we can term more broadly Liberal freedoms.”

Medieval European society gets the closest look, because it is out of these societies that the modern Liberal state develops, but many of the concepts and insights are applicable to other societies as well.

Religious Freedom versus Religious Tolerance

The book makes a very important point: religious freedom and religious tolerance are not the same thing; they are actually quite different. Most modern nation-states have true religious freedom, and most are founded on a secular basis (to the consternation of religious fundamentalists). Ancient states, however, practiced a form of religious tolerance, which was the toleration of minority religious beliefs, the same way you might tolerate your neighbor’s loud music instead of going over and starting a fight, or tolerate a screaming baby on a flight:

[3:18] Mark Koyama: “We attempt to project backwards our modern notions of what religious freedom is. In our modern language, we often use toleration interchangeably with religious freedom, where we describe toleration as an attitudinal thing–like ‘I’m a tolerant person; I don’t care what religion you have,’ as opposed to its original meaning, which was ‘to bear.’ This was a sufferance. We’re going to allow these Muslims, say, to practice their religion, but it’s not because we’re okay with it. It’s because it’s the best expedient or pragmatic response to religious diversity.”

[10:51] Patrick Wyman (host): “There’s a fundamental difference between religious sufferance and freedom. Between suffering something to happen because it’s necessary for you to run your state the way you want to, and actively embracing this thing as a legally-based ideal.”

I think that’s an important point. Ancient multi-ethnic states did not have true religious freedom. You will often find it asserted in various history books that they did, but this is a misunderstanding. What they had was religious tolerance; that is, they permitted subcommunities to openly practice their religion. It was a sufferance: they allowed it because it was better than the alternative.

This was a categorically different concept from religious freedom as we think about it today.

One example is the Roman Empire. All the Romans really wanted was to gain the spoils of their vast empire via tax collection and tribute. They often co-opted local rulers and other notables, who subsequently became “Romanized,” but they weren’t out to transform society. To that end, subjugated ethnic groups were allowed to maintain their cultural and religious practices, with a few stipulations. For most religions, this wasn’t a problem—they were flexible enough that they could accommodate some Roman gods in their practices and be more-or-less okay with it. The Jews, on the other hand, with their strident and uncompromising monotheism, were different. They regarded their God as the real one, and all others as idols, and worshiping idols was strictly forbidden. This is why there was so much tension in Judea, tension that ultimately led to several revolts and wars.

This was a time where religious identity was not separate from cultural or ethnic identity. The rise of doctrinal evangelical religions changed all that. You can be an Arab, a Turk, a Persian, or Balinese and also be a Muslim. You can be Irish, Polish, French, Italian, or Nigerian and be a Catholic. That’s a much more modern-day conception of religion—as a creed freely chosen. But in ancient societies, religion was an essential and inseparable part of shared cultural identity.

In our reading of the historical evidence, neither ancient Rome nor the Islamic or Mongol Empires had religious freedom. They often refrained from actively persecuting religious minorities, but they were also ruthless in suppressing dissent when it suited their political goals. Religious freedom is a uniquely liberal achievement, and liberalism is an achievement of post-1700 modernity. What explains it?

Which raises the second major point of the book.

Identity Rules versus General Rules

For me, the biggest takeaway was the difference between identity rules and general rules.

[6:25] “An identity rule is where the content or enforcement of the law depends on the social identity of the individual involved. In contrast, a general rule is a rule where the content or enforcement of the law is independent of that individual’s relevant social identity…The identity rules could privilege a minority, or they could disadvantage them. The key here is that your social identity is determinative.”

They actually distinguish three different types of rules: personal rules, identity rules, and general rules. Personal rules are targeted at the specific person who committed the infraction, and are largely ad hoc. This works well on the local level, where everybody knows everybody else—such as in a small self-governing village—but it doesn’t scale up.

When large empires came on the scene, they imposed identity rules, where law enforcement was based largely on one’s group identity. The reason they did this is that ancient states had limited capacity to govern at a local level—i.e., they had low state capacity. The sophisticated legal systems we have today—with their courts, police, bailiffs, jails, attorneys and professional judges—simply didn’t exist. The capacity just wasn’t there. Plus, the very notion of an individual as having an identity wholly separate and unmoored from the larger group to which he or she belonged was much less common in the ancient world than in our modern one. That is, ancient societies were collectivist by default. And so, rules were based on one’s ascribed group identity: one’s clan affiliation, social status, guild, corporate group, religion, etc.

With the shift to settled agriculture after 8,000 BC, political organizations became larger and states oversaw the introduction of more sophisticated legal systems to prevent theft, fraud, and uncontrolled violence. For most of history, and in much of the developing world today, these laws have taken the form of identity rules.

Identity rules depend on the social identity of the parties involved. This could refer to an individual’s clan, caste, class, religious affiliation, or ethnicity. Examples from historical legal systems abound. Aristocrats faced different rules from commoners. Slaves faced different rules from freemen. The Code of Hammurabi, for example, prescribed punishment based on the relative status of the perpetrator and the victim. Identity rules were common historically because governing individuals on the basis of their legible social characteristics was cheap. As religious identity was particularly salient, many identity rules treated individuals differently on the basis of their religion.

The Trouble in Getting to Denmark (Cato Unbound)

This is something I’ve repeatedly tried to emphasize in my writing: when we talk about “states,” or things like “the rise of the state” in ancient history, we’re talking about something qualitatively different than when we use the term “state” today. That’s important to keep in mind.

[9:24] “The nature of pre-modern states is that, because of the way they govern, they have to rely on identity rules. They don’t have the ability or the capacity to govern at a very local level. They can’t extend their reach deep into society. So they’re more likely to say to this community: ‘we’re going to delegate to you a lot of authority; a lot of power.’ Even if they wanted to enforce a general rule, they wouldn’t be able to.”

“To take another pre-modern example, if you look at the Ottoman state throughout its history, it’s seen as an absolutist state where the Sultan has all the power. But it’s such a vast empire that, given how primitive communication technologies are, it’s inevitably decentralized, and power is delegated to local nobles. And that means that religious minorities like Christians and Jews get quite a lot of autonomy; a lot of independence, because the state just can’t govern them directly.”

“So the local religious leaders will get quite a lot of autonomy, and a lot of ‘freedom,’ precisely because the state governs through identity rules, not through general rules. This results in a lot of self-governance for religious minorities. But the key point is that religious self-governance should not be mistaken for religious freedom. Nor should a state like the Ottoman state, which delegated power and gave autonomy to religious communities, be mistaken for a liberal state or for an example of religious freedom.”

Religious Legitimization

The rulers of ancient states relied primarily on religion to legitimize their rule. This pattern seems to go back very far indeed. A careful reading of, for example, The Creation of Inequality by Flannery and Marcus leads to the conclusion that all of the earliest ruling classes everywhere claimed some sort of special connection to the divine entities that were the object of collective reverence. Sometimes this was the “King as a god” model of ancient Egypt. Sometimes this was the “Ruler as steward” model, as in ancient Mesopotamia. Sometimes it was “sacral kingship,” with the ruler as high priest. Sometimes it was tribal elders or scribes who “interpreted God’s will.” Much later, it was the “Divine Right of Kings.” But religion seems to have played a role in virtually all cases that we know of.

If identity rules were a “cheap” form of enumerating and enforcing laws in low-tech, multi-ethnic societies, then appealing to religion was a “cheap” way for rulers to claim legitimacy in these types of societies. It was also crucial to the creation of coherent group identities, which were necessary for identity rules to function. Often it involved special treatment for clergymen, or some sort of power-sharing accommodation with religious officials. But that also led to fairly weak states, with little power to expand the rulers’ prerogatives.

Religion was so central to premodern societies that it is difficult to fully understand the transformations associated with modernity without attending to it. Religion was used to justify the categories with which government, and society more broadly, structured everyday life: women versus men, nobles versus commoners, guild members versus non-guild members, Muslims versus Christians, Christians versus Jews. All of these categories—as well as the different statuses associated with them in law and in culture—relied to varying degrees on religion to legitimize their use.

Religion was an especially important component of identity in the large agrarian civilizations of Europe and the Near East in a time before nationalism and nation states. Shared religious beliefs and religious identities were seen as crucial to maintaining social order. Religious differences were extremely destabilizing because they were associated with a host of deep societal cleavages.

In an environment where a common religious identity undergirded not only the institutions of the church, but also those of the state and civil society, both religious freedom specifically, and liberalism more generally, were unthinkable.

For instance, in medieval and early modern Europe oaths sworn before God played an important role in upholding the social order. These were thought so important that atheists were seen as outside the political community, since as John Locke put it, “promises, covenants, and oaths, which are the bonds of human society, can have no hold upon an atheist.”

A shared religious identity was also crucial for guild membership. Guilds in Christian Spain excluded Muslims. Guilds in 14th century Tallinn excluded Orthodox Christians. Jews were excluded almost everywhere. In parts of Europe converts from Judaism and even their descendants or remote relations could not be guild members. In a world governed by identity rules, an individual’s religious identity determined what economic activities were open to them.

The Trouble in Getting to Denmark (Cato Unbound)

Identity rules were even relied upon by rulers to raise revenue. For example, in many ancient empires, taxes were collected at the village level, with the collection delegated to local elders. Taxes might be assessed differently depending on the group in question. Merchants might be taxed differently than farmers, for example, and oftentimes nobles weren’t taxed at all! Different ethnic groups might face different levels of responsibility and taxation. Jews, for example, were the only group allowed to lend money at interest in Catholic Europe, so they were frequently used as cash cows by Christian rulers:

As an illustration, consider how early modern governments often used Jewish communities as a source of tax revenue. Usury restrictions made lending by Christians very costly. However, rulers could grant monopoly rights to Jews to lend without violating their religious principles. In turn, the rates of interest charged by Jewish lenders were high, and the profits were taxed away by the very rulers who granted these rights. Finally, the specialization of Jews as moneylenders exacerbated preexisting antisemitism among the Christian population. This in turn made it relatively easy for rulers to threaten Jews if they did not pay up.

So long as rulers relied on Jewish moneylending as a source of revenue, Jews were trapped in this vulnerable situation. Their position could improve only when states developed more sophisticated systems of taxation and credit.

As suggested by the above example, low state capacity and a reliance on identity rules are self-reinforcing. States that rely on identity rules face less incentive to invest in the fiscal and legal institutions that would increase state capacity. This, in turn, makes them more reliant on identity rules and less able to enforce general rules.

Social Equilibrium

Low state capacity, identity rules and religious legitimization all combined and interacted with each other to form a self-reinforcing social equilibrium, argue Koyama and Johnson.

What is a self-reinforcing equilibrium? This is a tricky one. It’s a concept developed by the Stanford economist Avner Greif. He distinguishes between “institutions as rules” and “institutions as equilibria.” The following is my interpretation, as best I can make it out:

Institutions as rules is just what it says—it looks at what the rules of the game are, and how they developed over time. Rules are prescriptive, and are set and enforced from above. They change very slowly.

Institutions as equilibria is a concept developed from game theory. In this conception, rules are an emergent phenomenon arising from consistent, repeated interactions between groups of people. There is no overall enforcer; rather, the rules develop through “playing the game” over and over again. Consequently, rules-as-equilibria are more likely to develop out of repeated voluntary interactions between groups rather than individuals, and are enforced by intra-group norms rather than by an all-powerful “referee” overseeing everything. The rules of the game are not static; they develop as time goes on. This approach emphasizes the incentives and motivations of the groups that are interacting.

In the institutions-as-rules approach, rules are institutions and institutions are rules. Rules prescribe behavior. In the institutions-as-equilibria approach, the role of “rules”, like that of other social constructs, is to coordinate behavior. The core idea in the institutions-as-equilibria approach is that it is ultimately the behavior and the expected behavior of others rather than prescriptive rules of behavior that induce people to behave (or not to behave) in a particular way. The aggregated expected behavior of all the individuals in society, which is beyond any one individual’s control, constitutes and creates a structure that influences each individual’s behavior. A social situation is ‘institutionalized’ when this structure motivates each individual to follow a regularity of behavior in that social situation and to act in a manner contributing to the perpetuation of that structure.

Institutions: Rules or Equilibria? (PDF)

An example he gives is the community responsibility system of the medieval Champagne Fairs:

For example, at the medieval Champagne Fairs, large numbers of merchants from all over Europe congregated to trade. Merchants from different localities entered into contracts, including contracts for future delivery, that required enforcement over time. There was no state to enforce these contracts, and the large number of merchants as well as their geographic dispersion made an informal reputation mechanism infeasible…impersonal exchange was supported by a “community responsibility system”. Traders were not atomized individuals, but belonged to pre-existing communities with distinct identities and strong internal governance mechanisms.

Although particular traders from each community may have dealt with merchants from another community only infrequently, each community contained many merchants, so there was an ongoing trading relationship between the communities, taken as a whole. Merchants from different communities were able to trust each other, even in one-shot transactions, by leveraging the inter-community “trust” which sustained these interactions. If a member of one community cheated someone from another community, the community as a whole was punished for the transgression, and the community could then use its own internal enforcement institutions to punish the individual who had cheated.

This system was self-enforcing. Traders had an incentive to learn about the community identities of their trading partners, and to establish their own identities so that they could be trusted. The communities had an incentive to protect the rights of foreign traders, and to punish their members for cheating outsiders, so as to safeguard the valuable inter-community trade. Communities also developed formal institutions to supplement the informal reputation mechanism and coordinate expectations. For example, each community established organizations that enabled members of other communities to verify the identity of its members.

Ultimately, the growth of trade that this institution enabled created the impetus for its eventual replacement by more formal public-order (state-based) institutions which could directly punish traders by, for example, jailing them or seizing their property.

Thus, we see the importance of group identity and solidarity in establishing and enforcing social norms in a world where centralized institutions (e.g., states) are very weak. Without a powerful state, there is simply no way to enforce norms among a group of isolated, atomized individuals whose identity is completely self-chosen. But membership in various sodalities makes it possible. If you were a bad merchant who cheated or reneged on your debts, you wouldn’t be a merchant very long, even without an all-powerful state enforcing contracts from above. Your reputation, and your relationship with the group, was paramount.

The authors also draw a distinction between equilibria that are stable and equilibria that are self-undermining.

[11:06] PW: “You talk a lot about political legitimacy, about what allows rulers to rule without the constant threat of political violence, of coercive violence. And so you get at the concept of self-reinforcing equilibrium—that this is how medieval society functioned. In your conception, you have religious legitimacy—legitimacy given to a ruler by religious authorities—and identity rules, working together to generate a kind of political equilibrium.”

MK: “In the Middle Ages we see widespread reliance on identity rules. Why? Well, one reason is that even if a ruler was ambitious and had read Roman law and envisioned ruling on the basis of laws which were more general, less parochial, and less local, they wouldn’t have the ability to really enforce them. Ambitious medieval rulers lacked bureaucracies and standing armies, so they would be unable to overturn these rules and replace them with more general rules. So that’s one self-reinforcing relationship—the relationship between low state capacity and reliance on identity rules.

“The other aspect is the reliance on religion as a source of legitimacy. One reason why religion is valuable is because medieval rulers didn’t provide much in the way of public goods, beyond maybe defense; but even defense is questionable because often defense is actually offense. So they’re not providing education, they’re not providing welfare—that’s done by the Church. They’re not really regulating markets. They’re not doing much to alleviate famine or harvest failures. Where does their legitimacy come from, then?

“It’s because they’re the ‘Most Christian King,’ or the ‘Catholic Monarch,’ or the ‘Defender of the Faith.’ Religion is a cheap way for rulers to get legitimacy. But if you’re using religion to get legitimacy, you’re making a deal with the religious authorities.

“So in the case of medieval Europe, you’re making a deal with the Church. What the deal entails might be things like: making Churchmen exempt from certain laws, or exempt from paying taxes, which was common in the medieval period. It might involve allowing the papacy to choose popes, or giving churchmen political offices.”

“If you have low state capacity, religious legitimization is going to be an appealing strategy. But at the same time, the more you rely on religion or religious authorities to legitimate your rule, that’s going to curtail your power, your discretionary authority to build state capacity. So it’s a self-reinforcing relationship.”

And so low state capacity, religious legitimization, and the application of identity rules were all linked together in maintaining a stable equilibrium. Eventually, though, that equilibrium was disrupted.

Disrupting the equilibrium: The Reformation and the printing press

The Gutenberg printing press, expanding literacy, and the Protestant Reformation were all intimately connected, and provide a potent example of how technological change often drives social change, for better or for worse (a point worth attending to today).

Suddenly you have many more religious minorities, disrupting the old stable equilibrium. Perhaps even more significantly, you have religious minorities that are allied across national boundaries. This is something that did not really exist before.

[23:00] “John Calvin and Martin Luther didn’t want to secularize society or the state—anything but. They wanted to revitalize religion on different foundations. But the net result was something very different than what they intended…”

“Large chunks of society that were once the concern of the Church are no longer the concern of the Church, at least in the Protestant territories. For example, in England the monasteries are sold off, and a lot of Church land is privatized, so a lot of functions that the Church was doing—like providing welfare to the poor—are no longer being provided in sixteenth-century England. That generates a crisis of beggars and paupers in Elizabethan England which the state eventually has to solve with the introduction of the Poor Law in the early seventeenth century.”

“In the German territories, it’s been shown by research that Protestantism leads to the selling off of Church buildings. Even in Catholic Europe, the Counter-Reformation is tightly controlled by powerful monarchies in Spain and France. And so the independent authority of the Church is weakened as a result. Similarly, the ability of identity rules and religious identity to effectively govern society is weakened where you have multiple religions in one society.

“So all of these societies which experience the Reformation wholeheartedly—France, the German territories, England—they generate religious minorities that they didn’t have before.”

“This is an ongoing problem. In England, the wars of religion destabilize the political economy for the entire period between Henry the Eighth and the Glorious Revolution. You’re always worrying whether the Catholics will somehow take control, or will turn England toward Rome. That generates the persecution of Catholics, and it generates conflict between Parliament and the King.”

“Germany is the most extreme example, because the Holy Roman Empire descends into a terrible war—the Thirty Years’ War—which is one of the worst wars in European history.”

“Throughout this period of crisis, which lasts more than a century, European rulers want to return their societies to how they had been in the medieval period. They want to regain religious homogeneity, so they think they can reconcile the Protestants and the Catholics. It’s a common view in sixteenth-century France that if the king can bring everyone together, there will be a way to bring the Protestants back into the fold. We also have the policy of expulsion, which is used not only in Spain and Portugal, but also in France at the end of the seventeenth century. You feel you can’t govern effectively so long as you have a group of people who belong to another religion, so you expel them.”

“Because rulers are conditioned on this prior equilibrium, they don’t know how to deal with religious differences. And it takes basically a century and a half of conflict, violence, and then accommodation before there’s a movement to reorient these societies along different rules. There’s what we recognize as a shift in political arrangements, which de-emphasizes religion as a source of political legitimacy and shifts away from this reliance on identity rules towards more general rules. And, of course, this transition takes several centuries.”

They then discuss a concept called multivocal signaling. In an era of low information flow and primitive communication technologies, rulers could target alternative messages to different groups of subjects. Each message was tailored to that particular social group, and was designed to appease them and keep them in the fold. The rulers’ identity became a Rorschach ink blot designed to be interpreted many different ways by many different groups of people.

But once information became easier to disseminate and access, different groups could compare notes. Now it was no longer possible to be all things to all people, sort of like when a cheating man’s wife discovers that he has a secret second family (or two). This concept is based on a book called The Struggle for Power in Early Modern Europe by political scientist Daniel H. Nexon:

[27:15] PW: “In the early modern period, especially with the rise of print and then the Reformation that follows, it gets a lot harder for rulers to be everything to all of their different groups of subjects—what Nexon terms multivocal signaling. Premodern rulers had done a lot of being one thing to one group of people in their kingdom, another thing to another group of people. So you could simultaneously be ‘Protector of the Jews’ and ‘Most Christian King,’ and this to the artisans, and this to the nobles. A ruler could be a lot of different things simultaneously because it was easy to target messages to those groups in the absence of mass media of communication.”

“But when you get the rise of print and simultaneously the splintering of society along religious lines, it gets a lot harder to be everything to everybody, because everybody knows what you’re saying to everybody else, too. So it becomes much harder for rulers to maintain these split identities that allow them to govern heterogeneous societies effectively by means of these identity rules.”

“Maybe that’s a thing that helps explain the shift to general rules. When you can’t be everything to everybody, you need to find different bases of legitimacy and power on which to rule.”

[28:33] MK: “…When we think about why religious persecution was so acute during that period—why you have these wars of religion—the kind of trite, high-school-history view is that people were simply intolerant back then. Then we can look down on them from our modern liberal societies and say that people in the sixteenth century really believed in burning heretics alive, or in killing people for religious differences.

“But Daniel Nexon’s book really points out that, because of the spread of print media, this religious crisis was really a geopolitical crisis, because Catholics in France and Spain were now interested in the fate of Catholics in England. So the Catholics in England become a potential fifth column in the geopolitical struggle taking place for non-religious political reasons between England, France, and Spain. They’re aligned with the political interests of a foreign power. Ditto Protestants in France, who are going to be aligned with the Dutch Republic, or with the German states, or with England. So, again, a potential fifth column that the state can no longer trust.

“Prior to the Reformation, there were religious differences across these European states. People would have their own local version of Catholicism. They would worship local saints and have local practices. But those local religious differences were not correlated in any way with political differences at the geopolitical level. The fact that you might have your own religious practices in Norfolk was not going to align you with the French. But by the seventeenth century, that is true for Catholic and Protestant minorities in their respective countries. So that’s another layer of this crisis that early modern rulers faced.”

Nexon himself describes multivocal signaling this way:

Multivocal signaling enables central authorities to engage in divide-and-rule tactics without permanently alienating other political sites and thus eroding the continued viability of such strategies. To the “extent that local social relations and the demands of standardizing authorities contradict each other, polyvalent [or multivocal] performance becomes a valuable means of mediating between them” since actions can be “coded differently within the audiences.” Multivocal signaling, therefore, can allow central rulers to derive the divide-and-rule benefits of star-shaped political systems while avoiding the costs stemming from endemic cross pressures… The spread of reformation, in particular, made it difficult for dynasts to engage in polyvalent signaling across religiously differentiated audiences…

The Struggle for Power in Early Modern Europe: Religious Conflict, Dynastic Empires, and International Change; by Daniel H. Nexon, pp. 114-115

This also helps explain the emergence of nationalism and national identities in nineteenth-century Europe, and the demise of multi-ethnic states like the Austro-Hungarian empire. As the hand of the state reached ever deeper down into the underlying fabric of society during this period, people wanted to be directly ruled by people “like them” and not by “outsiders.” Ancient states, by contrast, did fairly little besides collecting taxes, guaranteeing safe travel, and keeping basic order, with underlying ethnic identities remaining mostly intact.

The Roman Empire, again, provides an example. You can’t look at a map of the Roman Empire at its height without pondering, “how could they govern such a vast territory without any modern technology?” The answer is: they didn’t! The empire was sort of a “stratum” above local communities whose day-to-day lives probably differed very little from those of their remote ancestors. The empire just provided an organizational framework, and little else. Even a standing army could only move as fast as a soldier could march, and communicate as fast as a horseman could ride. Rulers moved the army about strategically, like pieces on a chessboard, in order to maintain order and quell revolt. Actual interaction with government officials, however, was limited to a small coterie of aristocratic local leaders. For most ordinary people in the ancient world, the “empire” they were nominally ruled by was just a remote abstraction. With the rise of strong, centralized states, that was no longer the case. Even today separatist conflicts abound, such as in Catalonia or Kurdistan.

The Emergence of General Rules and Modern Liberalism

And so we finally come to the introduction of general rules—rules that are written to treat everybody equally, regardless of their group identity, doctrinal creed, or any other ascribed social status. Whether you were Protestant, Catholic, or Jew (or even atheist!), the law was the same. Of course, this was an ideal often not lived up to, but it started to become the common expectation. It came about only after every other approach had been tried by Early Modern rulers and failed. It’s hard to win a war against a belief system. But this approach also freed up Early Modern rulers to expand state capacity in ways they could not have before, since appeasing religious officials was no longer paramount. Napoleon, for example, considered his law code to be his finest and most durable achievement, surpassing even his military victories. All sorts of archaic and feudal rules were swept away.

Yet there were many attempts from below to push back against this kind of governance, and hence there were significant roadblocks on the way to more modern systems of professional, bureaucratic governance, democracy, and the expansion of state capacity:

[31:15] We see endless attempts by Early Modern rulers to build state capacity, and they’re always being undermined at the local level…Every attempt by these Early Modern rulers to build state capacity is one step forwards, two steps backwards. There are these forces pushing back against any attempt to build a society based on general rules—what Francis Fukuyama calls the repatrimonialization of the state—and often it’s only in war that these modern states are forged. War is driving this increase in state capacity, but war is also destroying the economy and using up the lives of hundreds of thousands of individuals. That’s why it’s such an arduous process.

Some of these Early Modern rulers are heading towards more general rules and increased state capacity, others think the way forwards is actually backwards. The term historians use is confessionalization, and in some sense these confessional states that are built in the Early Modern period are trying to rebuild the medieval equilibrium. I think Louis the Fourteenth, what he’s doing when he expels the Huguenots—the French Protestants—is looking back to the golden age of how France was before the Reformation. He thinks if only he could get back and reunify the country religiously, that would actually strengthen his power and make the state stronger.

We know after the event that that’s a failure. It doesn’t strengthen the French economy or society, because they lose a very productive minority, but it also doesn’t work even on its own terms, because by the eighteenth century there are still many, many Protestants in France. It doesn’t get rid of the problem of a religious minority.

European rulers eventually had no choice but to acquiesce to freedom of religion as we now know it. Edicts of Toleration were signed all over Europe. The Founding Fathers of the United States—for whom the wars of religion were still recent history—recognized this and enshrined it in the Constitution. Its birth was much more painful in Europe, beginning with the often radical atheism of the leaders of the French Revolution. This kicked off the long nineteenth century—the period of conflict in which modern Liberalism was born.

With religious affiliation now being something “freely chosen” according to one’s own individual conscience, other forms of ascribed identity soon fell by the wayside. Free cities and communes had always been places for nonconformists in Medieval Europe to flee to in order to escape the stultifying conformity of the countryside and shed their traditional social obligations. These sophisticated, cosmopolitan urbanites—the bourgeoisie—became the nucleus of the new social order based around “freely chosen” social affiliations, flexible and ever-shifting personal identities, and explicit (as opposed to implicit) contractual obligations:

In our argument it was not that the Wars of Religion simply exhausted confessional and doctrinal disputes. Rather there was a transformation at the institutional level. The leading European states shifted away from identity rules towards more general rules. This shift was related to 19th-century historian Henry Sumner Maine’s discussion of the passage from status to contract: Status was imposed and ascriptive. Contracts, in contrast, are the outcome of voluntary choices. Status-based rules are invariably identity rules. Contracts provide the foundation for a system of general rules.

Moving from a fixed status to a contractual society helped set in motion a range of developments, including the growth of markets and a more extensive division of labor. But it had the unintended consequence of diminishing the political importance of religion, and this made liberalism feasible for the first time in history.

The Trouble in Getting to Denmark (Cato Unbound)

Wars played a major role in the emergence of modern states, particularly the need to raise ever-larger amounts of money to fund them. In our history of money, we saw how international merchants’ use of paper instruments of credit, such as bills of exchange, existed alongside the ruler’s legal authority to raise taxes and coin money. Bills of exchange and trade credit allowed these merchants to coordinate their activities across international boundaries. This was enforced not by the state, but by private networks of merchant-bankers (i.e., via institutions-as-equilibria). When the bankers’ ability to issue paper credit became conjoined with the state’s ability to levy taxes with the establishment of the Bank of England, you had a major step toward the creation of the modern welfare-warfare state. The end of the Thirty Years’ War in the Peace of Westphalia led to the concept of what political historians refer to as Westphalian sovereignty—the basis of the sovereign, absolutist nation-state. These developments, in turn, led to the establishment of a professional Weberian civil service, supplanting the patrimonial states governed by hereditary aristocrats (i.e., “depatrimonialization”). Per Wikipedia:

[Max] Weber listed several preconditions for the emergence of bureaucracy, including an increase in the amount of space and population being administered, an increase in the complexity of the administrative tasks being carried out, and the existence of a monetary economy requiring a more efficient administrative system. Development of communication and transportation technologies make more efficient administration possible, and democratization and rationalization of culture results in demands for equal treatment.

As Karl Polanyi extensively documented, strong states capable of enforcing general rules and contracts, together with haute finance, were the key requirements in the creation of Market Society. Market Society—where everything, including land and labor, was for sale and theoretically allocated according to the impersonal forces of supply and demand—was not merely an expansion of the kinds of activities that had gone on in generations prior. Rather, it was something altogether new and radically different, and it was done with the full blessing of the elite ruling classes. Patrick Deneen notes the connection in his book, Why Liberalism Failed:

Individualism and statism advance together, always mutually supportive, and always at the expense of lived and vital relations that stand in contrast to both the starkness of the autonomous individual and the abstraction of our membership in the state. In distinct but related ways, the right and left cooperate in the expansion of both statism and individualism, although from different perspectives, using different means, and claiming different agendas. This deeper cooperation helps to explain how it has happened that contemporary liberal states–whether in Europe or America–have become simultaneously more statist, with ever more powers and authority vested in central authority, and more individualistic, with people becoming less associated and involved with such mediating institutions as voluntary associations, political parties, churches, communities, and even family. For both “liberals” and “conservatives,” the state becomes the main driver of individualism, while individualism becomes the main source of expanding power and authority of the state. p. 46

Our main political choices come down to which depersonalized mechanism will purportedly advance our freedom and security–the space of the market, which collects our billions upon billions of choices to provide for our wants and needs without demanding from us any specific thought or intention about the wants and needs of others, or the liberal state, which establishes depersonalized procedures and mechanisms for the wants and needs of others that remain insufficiently addressed by the market.

Thus the insistent demand that we choose between protection of individual liberty and expansion of state activity masks the true relation between the state and market: that they grow constantly and necessarily together. Statism enables individualism, individualism demands statism. For all the claims about electoral transformations–for “Hope and Change,” or “Making America Great Again”–two facts are naggingly apparent: modern liberalism proceeds by making us both more individualist and more statist. This is not because one party advances individualism without cutting back on statism while the other does the opposite; rather, both move simultaneously in tune with our deepest philosophic premises. p. 17

The authors display their Libertarian biases toward the end of the article with this line: “While the far left has never accepted liberal values such as freedom of expression and freedom of religion, antipathy towards liberal values is now evident in mainstream progressive publications as well. Liberalism is indicted because it is perceived as legitimating inequality and failing to endorse social justice.” Notice the lack of citations here.

A nice strawman, but liberalism is not indicted; capitalism is. Capitalism is inherently undemocratic, since it invests disproportionate power in an unelected minority capitalist class, whose power stems from paper ownership claims (in deeds, stocks, bonds, and accounts) which can be passed down in perpetuity. As Deneen notes, in practice this simply replaces one aristocracy with another. And we all know that the rich can buy special treatment under the law due to their disproportionate wealth and influence in comparison with the rest of us, something which makes a mockery of so-called “liberal values.”

Also, under Neoliberalism repatrimonialization and rent-seeking have exploded. Monopolies and oligopolies control practically every major industry. The feckless rich are bailed out while ordinary citizens are left to their own devices. Prices have less to do with actual production costs than sheer market power, and rules are written and re-written by the industries themselves in order to privilege existing actors and keep out competitors (including governments themselves). Parasitic financial gambling has become the highest-return activity rather than providing useful goods and services. Incompetent cronies and family members take over key positions in the public and private sector. The upper class uses elite universities as a moat to maintain their elevated status, despite their demonstrated lack of judgement or competence.

Capitalism as it currently stands also commonly makes rules that favor certain groups over others. Professional classes like doctors, lawyers, engineers, and so forth, are shielded from international competition by government restrictions. Patent and copyright laws enforced by strong states prevent the copying of innovations by others, and preserve existing wealth distribution. Wealth is taxed more lightly than wages. Meanwhile, most average workers are left to “sink or swim” in a harsh, competitive globalized job market with no protections whatsoever. This is all rationalized as an “inevitable” force of nature. Dean Baker has written a whole book about it called Rigged:

Rigged: How globalization and the rules of the modern economy were structured to make the rich richer (Real World Economics Review)

In the end, the authors conclude, “[W]e think the core characteristics of a liberal society are the rule of law and reliance on general rules,” and, “Liberalism is valuable because it is the only form of social order we know of that is consistent with a high degree of autonomy and human dignity.”

Well, under that definition, socialism would fit the bill just as well, if not better. It’s hard to see a lot of “dignity” and “autonomy” given the number of people struggling in modern-day America. It’s hard to equate the millions of prisoners toiling away for pennies an hour with “dignity.” And it’s hard to have “autonomy” when the base condition of existence for most of us is having to constantly sell our labor or face utter ruin. Liberalism is—or should be—more than simply allowing the rich the “freedom” to make whatever rules they wish for their own benefit, to the detriment of society as a whole. If that doesn’t happen soon, then don’t expect Liberalism to last much longer.

Don’t Think Like an Economist

Here’s Tyler Cowen over at Marginal Revolution:

Larry Summers is my favorite liberal economist because even while maintaining his liberal values he never stops thinking like an economist. That makes him suspect among the left but it means that he is always worth listening to….

Summers on the Wealth Tax (Marginal Revolution)

No, that’s precisely what makes him NOT worth listening to (he’s—surprise, surprise!—opposed to the tax). Listening to arrogant Ivy League hyper-elite technocrats like Larry Summers is exactly why the Democratic Party is in the pathetic state it’s in, and continually loses elections, even to incompetent morons like Donald Trump. If Larry Summers is a representation of “liberal values,” then God help us all.

Summers was Obama’s economic advisor, the same Obama who refused to jail a single banker or financier for their role in the housing collapse, no matter how blatant their malfeasance. But the most telling example of how Larry Summers “never stops thinking like an economist” (a good thing, in Cowen’s estimation) is the infamous Summers memorandum, where he argued that—according to economic logic—Africa was tragically underpolluted, and that this needed to be rectified.

Summers, an enthusiast for the [World] Bank’s policy of encouraging poor countries to open their borders to trade, went on to explain why he thought that it was legitimate to encourage polluting industries to move to poor countries. ‘The measurement of the cost of health-impairing pollution depends on the forgone earnings from increased morbidity and mortality,’ he wrote. So dangerous pollution should be concentrated ‘in the country with the lowest wages’.

He added: ‘I think the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.’

He also introduced the novel notion of the ‘under-polluted’ country. These included the ‘underpopulated countries in Africa’ where ‘their air quality is probably vastly inefficiently low compared to Los Angeles’. His point was that since clean air, which he calls ‘pretty air’, is valuable as a place to dump air pollution, it is a pity poor countries can’t sell their clean air for this purpose. If it were physically possible there would be a large ‘welfare-enhancing trade in air pollution. . .’ he says.

Summers admits in his much-faxed memo that there might be objections to his case, on moral grounds for instance. But he concludes by saying that ‘the problem with these arguments’ is that they ‘could be turned around and used more or less effectively against every Bank proposal for liberalisation’.

‘What he is saying,’ comments British environmentalist Nicholas Hildyard, ‘is that this argument represents the logical conclusion of encouraging free trade round the world.’

Why it’s cheaper to poison the poor (New Scientist)

He never stops thinking like an economist!!! Um, yay?

The sociopathic logic above is the “logical” outcome of doing a cost-benefit analysis involving “tradeoffs”—the stock-in-trade of economics as a governing philosophy, which we’ll look at more closely in a bit.

Here are some more of Larry Summers’s greatest hits:

Fresh off his success in Lithuania, Summers moved to the World Bank, where he was named the chief economist in 1991, the year he issued his famous let’s-pollute-Africa memo. It was also the year that Summers, and his Harvard protégé Andrei Shleifer (who worked with Summers on the Lithuania economic transformation), began their catastrophic “rescue” of Russia’s crisis-ridden economy. It’s a complicated story involving corruption, cronyism and economic devastation. But by the end of the 1990s, Russia’s GDP had collapsed by more than 60 percent, its population was suffering the worst death-to-birth ratio of any industrialized nation in the twentieth century, and the financial markets that Summers and Shleifer helped create had collapsed in what was then the world’s biggest debt default ever. The result was the rise of Vladimir Putin and a national aversion to free markets and anything associated with Western liberalism.

The Summers Conundrum (The Nation)

Behold, the results of “liberal” economists. My core point is this: this kind of blinkered “economic thinking” is the very reason why the voting public believes there is no substantial difference between the Republicans and the (Neoliberal) Democrats. And they’re right! It’s also worth noting that Professor Cowen has let the cat out of the bag, tacitly admitting that the very discipline of economics is inherently right-wing (it makes him suspect among the left…). Yet it still masquerades as ideologically neutral!

Which brings me to a topic I’ve wanted to mention. A new book by New York Times economics columnist Binyamin Appelbaum explains how this kind of “economic thinking” has come to dominate the actions of the world’s governments in place of all other social factors. But it wasn’t always so. Quite the opposite! In fact, economics…

…was not always the imperial discipline. Roosevelt was delighted to consult lawyers such as [Adolf] Berle, but he dismissed John Maynard Keynes as an impractical “mathematician.” Regulatory agencies were headed by lawyers, and courts dismissed economic evidence as irrelevant. In 1963, President John F. Kennedy’s Treasury secretary made a point of excluding academic economists from a review of the international monetary order, deeming their advice useless. William McChesney Martin, who presided over the Federal Reserve in the 1950s and ’60s, confined economists to the basement…In the 1950s, a Columbia economist complained he made as much as a skilled carpenter.

How Economists’ Faith in Markets Broke America (The Atlantic)

But it was not to last. Appelbaum’s book details how economists became the de facto technocratic rulers of society, supplanting all other notions of good and effective governance. The story begins, ironically, with Roosevelt’s New Deal, which…

…created a new need for economists. [It] inflated the size of the federal government, and politicians turned to economists to make sense of their new complicated initiatives and help rationalize their policies to constituents. Even Milton Friedman, the dark apostle of market fundamentalism, admitted that “ironically, the New Deal was a lifesaver.” Without it, he said, he may have never been employed as an economist. From the mid-1950s to the late 1970s the number of economists in the federal government swelled from about 2,000 to 6,000. The New Deal also gave rise to cost-benefit analysis. Large projects, like dam building or rural electrification, needed to be budgeted and constrained…

The Tyranny of Economists (The New Republic)

This gave rise to the kind of cost-benefit analysis described above, where absolutely everything—human life, the ecosystem, labor, healthy communities, etc.—had its price, and that price became part of painful-but-necessary “tradeoffs”: a totally new way of thinking about how to govern society. This kind of cost-benefit analysis, even though it produced distinct winners and losers, wasn’t seen as a problem, because…

…the government could theoretically redirect a little money from the winners to the losers, to even things out: For example, if a policy caused corn consumption to drop, the government could redirect the savings to aggrieved farmers. However, it didn’t provide any reason why the government would rebalance the scale, just that it was possible.

What is now called the Kaldor-Hicks principle “is a theory,” Appelbaum says, “to gladden the hearts of winners: it is less clear that losers will be comforted by the possession of theoretical benefits.” The principle remains the theoretical core of cost-benefit analysis, Appelbaum says. It’s an approach that sweeps the political problems of any policy—what to do about the losers—under the rug.

The Tyranny of Economists (The New Republic)

In fact, many of the proponents of global “free-trade” openly acknowledged that there would inevitably be “winners” and “losers” from such policies. But, they claimed, some of the gains of the winners could be easily siphoned off to compensate the losers, making everyone better off in the long run. Win-win thinking at its finest.

It should be obvious by now what kind of a sick joke that was. It should also be proof positive of just how drastically economic theory fails to match reality.

It was also World War Two that ushered in the concept of Gross Domestic Product, or GDP (originally Gross National Product, or GNP), which was designed to measure total national output for the war. Even the economists who created it (Kuznets et al.) explicitly warned that it was not to be taken as a be-all and end-all measure of societal health or well-being. It was designed to manage the war economy, and its continual increase was not to be regarded as an end in itself.

Yet that’s exactly what it became thanks to economists.

It was the ultimate triumph of “market society” as Polanyi described it. Markets and money were now the sole governing principles. Political decisions were reduced to simply a series of cost-benefit analyses, freeing politicians from any moral culpability for their decisions. Governing society was no longer about increasing the general welfare as the Framers of the Constitution imagined—it would now be simply about increasing GDP and making the necessary “tradeoffs”.

With the Neoliberal revolution, economists emerged from the basement and took over the place:

Starting in the 1970s…economists began to wield extraordinary influence. They persuaded Richard Nixon to abolish the military draft. They brought economics into the courtroom. They took over many of the top posts at regulatory agencies, and they devised cost-benefit tests to ensure that regulations were warranted. To facilitate this testing, economists presumed to set a number on the value of life itself; some of the best passages of Appelbaum’s fine book describe this subtle revolution. Meanwhile, Fed chairmen were expected to have economic credentials. Soon the noneconomists on the Fed staff were languishing in the metaphorical basement.

But, in the wake of the Powell Memorandum, the biggest beneficiaries were big business, which soon poured bottomless amounts of money into economics departments (such as the one that employs Cowen, as well as Summers’s Harvard) and a dizzying array of “think-tanks” which employed the ever-expanding number of economics graduates. Economics soon went from virtual obscurity to one of the most popular majors at American universities, especially for children of the affluent. In the 1980s, big corporations and the wealthy…

…soon found a powerful ally in economists, a vast majority of whom opposed regulation as inefficient. Corporations began to argue that if the cost of compliance to a new regulation (say seatbelts or lead remediation) exceeded the benefit, it shouldn’t be implemented. The government, starting at the end of Nixon’s administration and continuing to this day, agreed.

Cost-benefit analysis hinged on an ever-changing calculation of the monetary value of a human life. If a life could be shown to be expensive, regulation could be justified. If not, it would be blocked or scrapped. The EPA, in 2004—to allow for more lax air pollution regulations—quietly sliced eight percent off their value of human life, and then another three percent in 2008 by deciding to not adjust for inflation. The fluctuating value of life was a seemingly rational but conveniently opaque method for making political decisions. It simultaneously trimmed away the gray areas of political discourse by reducing the debate to a small set of numbers and obscured the policy in hundreds of pages of statistics, figures, and formulas. This marriage of rational simplicity and technocratic complexity provided cover for regressive policies that favored corporations over taxpayers. Economists reduced a question that dogged political philosophers for centuries—about how much harm is acceptable in a society—to a math problem.

The Tyranny of Economists (The New Republic)

Here’s another particularly vivid example of the results of that type of thinking:

In June of 1985, the Consumer Product Safety Commission issued a “national consumer alert” about the type of sofa chair that strangled [two-year-old Joy] Griffith. But the commission still needed to decide if they would require design changes. So Warren Prunella, the chief economist for the Commission, did some calculations. He figured that 40 million chairs were in use, each of which lasted ten years. Estimates said modifications likely would save about one life per year, and since the commission had decided in 1980 that the value of a life was one million dollars, the benefit of the requirement would be only ten million. This was far below the cost to the manufacturers. So in December, the commission decided that they didn’t need to require chair manufacturers to modify their products. If this seems odd today, it was then too—so odd, in fact, that the chair manufacturers voluntarily changed their designs.

Prunella’s calculations were the result of a growing reliance on cost-benefit analysis, something that the Reagan administration had recently made mandatory for all new government regulations. It signaled the rise of economists to the top of the federal regulatory apparatus. “Economists effectively were deciding whether armchairs should be allowed to crush children,” Binyamin Appelbaum writes in his new book The Economists’ Hour. “The government’s growing reliance on cost-benefit meant that economists like Prunella were exercising significant influence over life and death decisions.” Economics had become a primary language of politics.

The Tyranny of Economists (The New Republic)

And this is how we got to the policies of today, where, as Margaret Thatcher confidently declared, “there is no alternative”:

“The United States experienced a revolution. No gun was fired. No lives were lost. Nobody marched. Most people didn’t notice. Nonetheless, it happened.”…what Appelbaum presents could be seen as a picture of a dramatic class-war, a conservative counter-revolution in reaction to the New Deal government, duplicitously legitimized by a regressive political theory: economics. Or as a more bracing economics writer, John Kenneth Galbraith, once put it: “What is called sound economics is very often what mirrors the needs of the respectably affluent.”

The Tyranny of Economists (The New Republic)

To reiterate, Larry Summers was chief economic advisor to Obama, a Democrat. What, then, is the difference between the two major political parties again?

…a 1979 survey of economists that “found 98 percent opposed rent controls, 97 percent opposed tariffs, 95 percent favored floating exchange rates, and 90 percent opposed minimum wage laws.” And in a moment of impish humor [Appelbaum] notes that “Although nature tends toward entropy, they shared a confidence that economies tend toward equilibrium.” Economists shared a creepy lack of doubt about how the world worked.

The Tyranny of Economists (The New Republic)

No wonder Cowen (who manages the Koch-funded Mercatus Center at George Mason University) is such a fan! And thus you get his praise of how Summers is always “thinking like an economist” despite his alleged “liberal values.” So when you are urged, for example, to “think like an economist,” you are all but guaranteed to come up with conclusions that overwhelmingly favor the rich and powerful and screw over the rest of us. And all of this is presented as totally nonpolitical and “just common sense”!

Isn’t it funny how “bad economics” is anything that helps labor and the working class?

However, this kind of quasi-religious faith in free trade and free markets has shown a remarkable and disastrous lack of effectiveness in the real world:

Inequality has grown to unacceptable extremes in highly developed economies. From 1980 to 2010, life expectancy for poor Americans scandalously declined, even as the rich lived longer. Meanwhile, the primacy of economics has not generated faster economic growth. From 1990 until the eve of the financial crisis, U.S. real GDP per person grew by a little under 2 percent a year, less than the 2.5 percent a year in the oil-shocked 1970s.

How Economists’ Faith in Markets Broke America (The Atlantic)

…the theories often demonstrably did not do what they were supposed to do. Monetarism didn’t curb inflation, lax antitrust and low regulation didn’t spur innovation, and low taxes didn’t increase corporate investment. Big economic shocks of the 1970s, like the befuddling “stagflation,” provided reasons to abandon previous, more redistributive economic regimes, but a reader still burns to know: How could economists be so wrong, so often, and so clearly at the expense of the working people in the United States, yet still ultimately triumph so totally? It’s likely because what economists’ ideas did do, quite effectively, was divert wealth from the bottom to the top. This entrenched their power among the winners they helped create.

The Tyranny of Economists (The New Republic)

And this type of thinking has now permeated the entire world as Neoliberalism encircled the globe, from Chile to China. As a result, we see the entire world burning down: metaphorically in the case of places like Chile, Lebanon, Syria, France, Spain, Russia, Indonesia, Hong Kong, and even New York City; and literally in places affected by climate change, like California. It has also led to the majority of the world’s population living under some kind of strongman authoritarian rule, with surveillance states expanding daily and democracy under dire threat everywhere.

New Delhi now has to distribute gas masks to students just so they can go outside. Isn’t it time we stopped listening to the economists, even the allegedly “liberal” ones like Larry Summers, as well as overtly pro-corporate Libertarians like Cowen? In reality, they are all of a piece, and it’s time for these sociopaths to go into the dumpster of history where they belong. John Maynard Keynes himself hoped that economists would eventually become “about as important as dentists.” But that’s drastically unfair to dentists—they are far more useful and have done far less harm to civilization! Carpenters and dentists provide real benefits to society. Economists, however, should probably be treated the way witches were treated in Medieval Europe. To paraphrase Diderot: Man will never be free until the last CEO is strangled with the entrails of the last economist.

Inequality in Old Europe

I’ve not had much time to write – I’m sprucing up Hipcrime Vocab international headquarters in case I want to sell it and relocate. I’ll say more about that another time.

This article is particularly interesting given what we talked about last time concerning the Iroquois culture. It’s about a study of a Bronze Age farming settlement in Europe (modern-day Augsburg) and concludes, “Social Stratification Dates Back to Bronze Age Societies.” The societies studied by the researchers were:

. . .members of Central European farming communities that spanned from the late Neolithic period through the Bronze Age—or from around 2800 B.C. through 1300 B.C.

So I’m guessing this is the Corded Ware, Bell Beaker and Funnelbeaker cultures in particular (or similar cultures). Most likely they spoke an Indo-European language and may have been proto-Celtic.

. . . it has long been assumed that prior to the Athenian and Roman empires—which arose nearly 2,500 and more than 2,000 years ago, respectively—human social structure was relatively straightforward: you had those who were in power and those who were not.

A study published Thursday in Science suggests it was not that simple. As far back as 4,000 years ago, at the beginning of the Bronze Age. . .human families of varying status levels had quite intimate relationships. Elites lived together with those of lower social classes and women who migrated in from outside communities. It appears early human societies operated in a complex, class-based system that propagated through generations.

Ancient Teeth Reveal Social Stratification Dates Back to Bronze Age Societies (Scientific American)

I’m not sure it was ever assumed to be that simple, but whatever. The interesting thing here is what it says about the creation of inequality. What we see here is a household structure, with various individuals ranked within it. People of different status lived cheek-to-jowl, and this is revealed by the burials:

Related individuals, the study’s authors found, were laid to rest with goods and belongings that appeared to be passed down through generations. The unrelated people in the household were buried with nothing, suggesting they were a lower class of “family members,” who were not given the ceremonial treatment.

“We don’t know if the low-status individuals in Augsburg were slaves, menial staff or something else,” comments Philipp Stockhammer of the Ludwig Maximilian University of Munich, who was a co-author of the new study. “But we can see that in every household, individuals of very different status were living together.”

So, then, it’s quite likely that inequality first appeared within households before it became institutionalized more broadly. Second, my guess is that certain lineages became ranked lineages, with some claiming descent from a more ancient or revered ancestor, for instance. When you combine these two factors, you get a two-pronged stratification giving rise to inequality: one interfamilial—between different Houses—and one intrafamilial—between different individuals within the House. The highest-ranking individuals of the highest-ranking Houses were probably the most important decision-makers (chiefs or kings). However, like the Iroquois, these societies had no permanent standing army or police, so there was no way for potential leaders to impose their will on the rest of the tribe.

It’s quite possible that this was a sort of feudal-style order based around cattle ownership. In his Lectures on the Early History of Institutions, Henry Maine considered whether the feudal system as it developed in post-Roman Europe grew out of the land-tenure laws of the Celtic and Germanic tribal cultures that occupied the continent.

Under this system, the lands of a tribe (fine) were not owned outright by any single individual, although the chiefs (flaiths) may have possessed small portions of their own land. The chiefs did manage the land, however, giving them a considerable degree of control over the grazing herds. They loaned out portions of the herd to other tribe members, a practice called giving stock. The receivers of stock became vassals (céiles) of the chief, with certain obligations, including military duties. The amount of stock received from the chief determined one’s social status. Those who took only a little stock were Saer (free) tenants; those who took larger loans were Daer (base) tenants. The Daer tenants had the more onerous obligations. There were also freemen with no property, and an unfree servile class, with differing degrees of legal protection (Bothachs, Sen-Cleithes, and fuidhirs), with fuidhirs also subdivided into Saer and Daer. These folks had no clan affiliation, and were tantamount to slaves:

Every considerable tribe, and almost every smaller body of men contained in it, is under a Chief, whether he be one of the many tribal rulers whom the Irish records call Kings, or whether he be one of those heads of joint-families whom the Anglo-Irish lawyers at a later date called the Capita Cognationum. But he is not owner of the tribal land. His own land he may have…and over the general tribal land he has a general administrative authority…and, probably in that capacity, he has acquired great wealth in cattle…

It has somehow become of great importance to him to place out portions of his herds among the tribesmen…Thus the Chiefs appear in the Brehon law as perpetually ‘giving stock,’ and the tribesmen as receiving it…It is by taking stock that the free Irish tribesman becomes the Ceile or Kyle, the vassal or man of his Chief, owing him not only rent but service and homage…

The new position which the tribesman assumed through accepting stock from a Chief varied according to the quantity of stock he received. If he took much stock he sank to a much lower status than if he had taken little. On this difference in the quantity accepted there turns the difference between the two great classes of Irish tenantry, the Saer and Daer tenants…

The Saer-stock tenant, distinguished by the limited amount of stock which he received from the Chief, remained a freeman and retained his tribal rights in their integrity. The normal period of his tenancy was seven years, and at the end of it he became entitled to the cattle which had been in his possession. Meantime he had the advantage of employing them in tillage, and the Chief on his part received the ‘growth and increase and milk,’…besides this it entitled the Chief to receive homage and manual labour; manual labour is explained to mean the service of the vassal in reaping the Chief’s harvest and in assisting to build his castle or fort, and it is stated that, in lieu of manual labour, the vassal might be required to follow his Chief to the wars.

Any large addition to the stock deposited with the Saer-stock tenant, or an unusual quantity accepted in the first instance by the tribesman, created the relation between vassal and chief called Daer-stock tenancy. The Daer-stock tenant had unquestionably parted with some portion of his freedom, and his duties are invariably referred to as very onerous. The stock given to him by the Chief consisted of two portions, of which one was proportionate to the rank of the recipient, the other to the rent in kind to which the tenant became liable…Beside the rent in kind and the feudal services, the Chief who had given stock was entitled to come, with a company of a certain number, and feast at the Daer-stock tenant’s house, at particular periods, for a fixed number of days…

…the relation out of which Daer-stock tenancy and its peculiar obligations arose was not perpetual. After food-rent and service had been rendered for seven years, if the Chief died, the tenant became entitled to the stock; while, on the other hand, if the tenant died, his heirs were partly, though not wholly, relieved from their obligation. At the same time it is very probable that Daer-stock tenancy, which must have begun in the necessities of the tenant, was often from the same cause rendered practically permanent…

…the effect of the ancient Irish relation was to produce, not merely a contractual liability, but a status. The tenant had his social and tribal position distinctly altered by accepting stock. Further, the acceptance of stock was not always voluntary. A tribesman, in one stage of Irish custom at all events, was bound to receive stock from his own ‘King,’ or, in other words, from the Chief of his tribe in its largest extension; and everywhere the Brehon laws seem to me to speak of the acceptance of stock as a hard necessity.

https://oll.libertyfund.org/titles/2040#Maine_1413_94

Once again we see that status is dependent upon credit/debt relationships. Over time, these relationships become solidified. The chief who distributes cattle to the tribe is also the chief who distributes booty in raids, and cattle rustling is a frequent theme in early Irish literature. We don’t know if the social structure of these ancient central European farming communities was close to that of tribal Ireland, but it may have been.

Another clue to the social structure comes from another finding:

By radio dating the teeth samples and comparing them with regional geographical radioactivity profiles, Stockhammer and his collaborators also determined where each person grew up. Traces of radioactive elements called isotopes are all around us, including in our food and water. From childhood, these elements are incorporated into our bones and can be used to determine where someone was raised. The results show that in nearly all of the households studied, there were females who hailed from elsewhere.

Whereas the remains suggest that farmsteads were passed through many generations of males—up to five in some cases—females only persisted in a community for one generation. This observation means a system of patrilocality was followed: men stayed in their place of upbringing, while women moved in with their husband’s family. Patrilocal cultures had previously existed, including far back in the Paleolithic, but the findings support the idea that the practice became more common as the organization of societies developed.

Stockhammer points out that social structure has long been a major topic in archeology and that countless studies have explored the communal interactions of ancient societies. Yet he feels the new study illuminates the transition of societal organization as we moved, from the late Stone Age to the Bronze Age, toward individual families living with those of a subservient class and women from other communities. “We added a new aspect to the current state of the art: the integration of genetic, isotopic and archaeological data, which helped us understand the complexity of past social structures,” Stockhammer says. Though he is resolute that his findings cannot directly be correlated with other ancient societies, he does draw a comparison with classical Greece’s oikos family structure and Rome’s familia, in which slaves and those of lower status were part of the family.

Indeed, Greek and Roman cultures initially developed out of such farming communities. The oikos and the familia were extended households that formed the smallest constituent part of these societies. They were united by kinship under the authority of the patriarch, as Maine argued in Ancient Law:

It would be a very simple explanation of the origin of society if we could … suppose that communities began to exist wherever a family held together instead of separating at the death of its patriarchal chieftain.

In most of the Greek states and in Rome there long remained the vestiges of an ascending series of groups out of which the State was at first constituted. The Family, House, and Tribe of the Romans may be taken as the type of them, and they are so described to us that we can scarcely help conceiving them as a system of concentric circles which have gradually expanded from the same point.

The elementary group is the Family, connected by common subjection to the highest male ascendant. The aggregation of Families forms the Gens or House. The aggregation of Houses makes the Tribe. The aggregation of Tribes constitutes the Commonwealth. Are we at liberty to follow these indications, and to lay down that the commonwealth is a collection of persons united by common descent from the progenitor of an original family?

Of this we may at least be certain, that all ancient societies regarded themselves as having proceeded from one original stock, and even laboured under an incapacity for comprehending any reason except this for their holding together in political union. The history of political ideas begins, in fact, with the assumption that kinship in blood is the sole possible ground of community in political functions; nor is there any of those subversions of feeling, which we term emphatically revolutions, so startling and so complete as the change which is accomplished when some other principle—such as that, for instance, of local contiguity—establishes itself for the first time as the basis of common political action.

http://www.gutenberg.org/files/22910/22910-h/22910-h.htm#CHAPTER_V

What’s most interesting to me is the patrilocal method of residence. The Iroquois, as you recall, were matrilocal—tracing descent from the mother’s side and living with her clan. This latest study gives a boost to Engels’s theory that when matrilineal and matrilocal cultures were “overthrown” in favor of patriarchy, property came to be inherited through the male line, giving rise to private property and inequality. Go back and read what he said about this in the previous post.

It’s worth noting that pastoral (cattle) cultures are—without any exception that I’m aware of—male dominated and patriarchal. Is the introduction of cattle the path to inequality? Contrast this with the matrilocal and (semi-) matriarchal culture of the Iroquois with their clan mothers. They had no large domesticated animals (which didn’t exist in North America), and practiced hoe-based farming, which was done mainly by women. It seems that this meant that women had higher status in that culture, with a correspondingly flatter social hierarchy and less inequality of property.

As for when it was overthrown, Marija Gimbutas famously argued for years that “Old Europe” was matrilineal and matriarchal, and practiced “goddess worship,” based on the large quantity of female figurines that she found. She then claimed that the Kurgan peoples swept in from the east and replaced the Old European culture with one that was much more warlike and patriarchal, and that worshiped masculine gods. The farming peoples in this study would have been their descendants. They were also likely the ancestors of the various Indo-European cultures. While she may have overstated the importance of goddess-worship (on very little evidence), in many other respects Gimbutas may have been largely correct about the transition (she is backed up by recent DNA evidence). To what extent were these “low-status” individuals descendants of the earliest farmers and hunter-gatherers of Old Europe?

. . .Stockhammer believes marrying outside one’s community encouraged the cultural exchange of information, which ultimately led to the formation of new civilizations. Increasing social interactions with other communities allowed for a more efficient transfer of skills and goods to a wider population. “I am sure the fact that a large number of adult women from outside the society entered the society had an important effect—that new knowledge and technologies came with them,” he says.

Anthropologists and scientists from other fields refer to a concept called ratcheting, in which cultural information is not just shared and learned but also modified and improved. If ancient humans mingled with outside communities, countless kernels of know-how would have been borrowed and altered for both good and bad (more effective tools; more lethal weapons and warfare).

Individuals marrying outside of their community may have also made sense from the standpoint of genetic fitness and allowed local societies to thrive. Doing so would have prevented the genetic abnormalities that come from inbreeding and perhaps, in the long term, improved collective community survival.

Interestingly, our closest animal relatives, chimpanzees, are also female exogamous. That is, the females leave the community in which they were born, whereas the males stick around. Could this be a clue to human social relations? As a side note, even today, it feels like more women leave the places of their birth to seek out mates, while those who stay put are more often men. If I may be so bold, I suspect this is why women are so much more into travel than men are on average (there are exceptions; I’m one of them), and do so much more of it. In my failing Rust Belt city, for example, every woman not pregnant by twenty-one moves away to somewhere better, and only comes back to raise her kids (aside from the occasional boomerang).

Only humans can form these types of affinal relationships, and it allows for much larger social agglomerations and transfer of information. Robin Dunbar talks about this in his book Human Evolution: Our Brains and Behavior. Chimps may be female exogamous, but there is no ongoing relationship between families, and hence no uniting of disparate chimp bands. Consequently, there is no cultural or knowledge transfer between chimp bands; they are largely hermetically sealed. H. sapiens’ ability to overcome this limitation may have played a role in us coming to dominate the planet, and may have very deep roots, indeed.

The foundation of all of this may have been religion, specifically, the tutelary religion of the hearth, as Fustel de Coulanges eloquently describes in The Ancient City:

A family was composed of a father, a mother, children, and slaves. This group, small as it was, required discipline. To whom, then, belonged the chief authority? To the father? No. There is in every house something that is above the father himself. It is the domestic religion; it is that god whom the Greeks called the hearth master—ἑστία δέσποινα—whom the Romans called Lar familiaris. This divinity of the interior, or what amounts to the same thing, the belief that is in the human soul, is the least doubtful authority. This is what fixed rank in the family.

The father ranks first in presence of the sacred fire. He lights it, and supports it; he is its priest. In all religious acts his functions are the highest; he slays the victim, his mouth pronounces the formula of prayer which is to draw upon him and his the protection of the gods. The family and the worship are perpetuated through him; he represents, himself alone, the whole series of ancestors, and from him are to proceed the entire series of descendants. Upon him rests the domestic worship—he can almost say, like the Hindu, “I am the god.” When death shall come, he will be a divine being whom his descendants will invoke.

Consistent with the findings of the researchers about unrelated females moving into male-centric households, Fustel de Coulanges also found that Roman hearth religion had women leaving the house of their birth and becoming a part of their husband’s family:

This religion did not place woman in so high a rank. The wife takes part in the religious acts, indeed, but she is not the mistress of the hearth. She does not derive her religion from her birth. She is initiated into it at her marriage. She has learned from her husband the prayer that she pronounces. She does not represent the ancestors, since she is not descended from them. She herself will not become an ancestor, placed in the tomb, she will not receive special worship. In death, as in life, she counts only as a part of her husband.

Greek law, Roman law, and Hindu law, all derived from this old religion, agree on considering the wife as always a minor. She could never have a hearth of her own; she was never the chief of a worship. At Rome she received the title of mater familias; but she lost this if her husband died. Never having a sacred fire which belonged to her, she had nothing of what gave authority in the house. She never commanded; she was never even free, or mistress of herself. She was always near the hearth of another, repeating the prayer of another; for all the acts of religious life she needed a superior, and for all the acts of civil life a guardian. pp. 68-69

And getting back to the initial theme of passing property down via inheritance seen in the burials of these communities, that too seems to have been intimately connected to the religious worship of the hearth according to Coulanges:

There are three things which, from the most ancient times, we find founded and solidly established in Greek and Italian societies: the domestic religion; the family; and the right of property — three things which had in the beginning a manifest relation, and which appear to have been inseparable. The idea of private property existed in the religion itself. Every family had its hearth and its ancestors. These gods could be adored only by this family and protected it alone. They were its property.

Now, between these gods and the soil, men of the early ages saw a mysterious relation. Let us first take the hearth. This altar is the symbol of a sedentary life; its name indicates this. It must be placed upon the ground; once established, it cannot be moved…The god is installed there not for a day, not for the life of one man merely, but for as long a time as this family shall endure, and there remains any one to support its fire by sacrifices. Thus the sacred fire takes possession of the soil, and makes it its own. It is the god’s property.

And the family, which through duty and religion remains grouped around its altar, is as much fixed to the soil as the altar itself. The idea of domicile follows naturally. The family is attached to the altar, the latter is attached to the soil; an intimate relation, therefore, is established between the soil and the family. There must be his permanent home, which he will not dream of quitting, unless an unforeseen necessity constrains him to it. Like the hearth, it will always occupy this spot. This spot belongs to it, is its property, the property not simply of a man, but of a family, whose different members must, one after another, be born and die here. p. 48

Is this the origin of private property? In ancient Rome, when land (and slaves) were transferred between owners, such a transfer was accompanied by a “solemn ceremony” called mancipatio (the origin of the word emancipation). Over time, these ceremonies were replaced by cash transfers and real estate markets, and inequality ran amok, eventually leading to Rome’s downfall.

How closely did these ancient European farming cultures resemble those of the ancient Greeks and Romans? After all, they were both based around farming and cattle-rustling. We can only speculate, since culture, unlike the elements in bones and teeth, does not calcify. The researchers’ invoking of Greek and Roman culture is telling, however. It certainly seems like they may have been quite similar. Hopefully, new methods like those used in the study will give us even more data to work with:

University of Michigan archeologist Alicia Ventresca Miller, who was not involved in the paper, shares Stockhammer’s enthusiasm and feels this new work reveals a lot about early human inheritance of goods and property. “As far as I can tell, there are no other studies that have such large sample sizes and multiple analyses to come to these conclusions, especially for prehistoric groups,” she says. “Their finding that wealth was inherited, rather than achieved, has real impacts for research on inequality and will likely change our understanding of ancient Europe. The results give us insight into the complexity of ancient lifeways.”

Krishna Veeramah, a population geneticist in the department of ecology and evolution at Stony Brook University, who was also not involved in the study, thinks the new multidisciplinary research approach may serve as a model for future work, especially as characterizing ancient DNA becomes more affordable and widespread.

On a related note, it seems that these pre-state cultures were hardly static. Over at Peter Turchin’s blog, he writes:

As the readers of this blog know, a big chunk of my research focuses on why complex societies go through cycles of alternating internally peaceful, or integrative, phases and turbulent, or disintegrative periods. In all past state-level societies, for which we have decent data, we find such “secular cycles” (see more in our book Secular Cycles).

What was a surprise for me was to find that pre-state societies also go through similar cycles. Non-state centralized societies (chiefdoms) cycle back and forth between simple (one level of hierarchy below the chief) and complex (two or more hierarchical levels) chiefdoms. But now evidence accumulates that even non-centralized, non-hierarchical societies cycle. The work by archaeologists, such as Stephen Shennan, showed that various regions within Europe went through three or four population cycles before the rise of centralized societies (see, for example, his recent book The First Farmers of Europe).

These cycles were quite drastic in amplitude. For example, last month at a workshop in Cologne, I learned from archaeologists working in North Rhine that population declines there could result in regional abandonment. Several hypotheses have been advanced, including the effects of climate fluctuations, or soil exhaustion. But there is no scientific consensus—this is a big puzzle.

The Puzzle of Neolithic Cycles: the Strange Rise and Collapse of Tripolye Mega-Settlements (Cliodynamica)

The authors of the paper hypothesize that as power became too centralized, the various families and social groups comprising the culture simply dispersed rather than become subservient to permanent despotic power. Turchin thinks it was warfare, specifically protection of surpluses from nomadic outsiders:

First, why did the different groups move together in the first place? From almost any point of view, except one [defense], this was a really poor decision. Such crowding together resulted in serious problems with sanitation and disease. Additionally, farmers had to waste a lot of time traveling to their fields, because such huge settlement required a lot of land to support it. The only reason for such population concentration that makes sense to me is collective defense…

The second question is that at the end of the mega-settlement period, the population didn’t simply disperse out; there was a very substantial population collapse. Again, what was the reason for this? In historical periods the usual answer is pervasive endemic warfare. Not only war kills people, its effect on demography is even more due to the creation of a “landscape of fear,” which doesn’t permit farmers to cultivate fields, so that the local population gradually starves, has fewer babies, and is further diminished by out-migration…

However, the former hypothesis is consistent with James C. Scott’s ideas that people in early farming cultures were often looking for a way to get out from the bitter toil and backbreaking work of farming by abandoning it and becoming “barbarians.” This, he says, happened whenever authority became too coercive for too long. Those stockade walls were to keep the farmers in, not the barbarians out. Slate Star Codex recently reviewed Scott’s book:

Scott thinks of these collapses not as disasters or mysteries but as the expected order of things. It is a minor miracle that some guy in a palace can get everyone to stay on his fields and work for him and pay him taxes, and no surprise when this situation stops holding. These collapses rarely involved great loss of life. They could just be a simple transition from “a bunch of farming towns pay taxes to the state center” to “a bunch of farming towns are no longer paying taxes to the state center”. The great world cultures of the time – Egypt, Sumeria, China, wherever – kept chugging along whether or not there was a king in the middle collecting taxes from them. Scott warns against the bias of archaeologists who – deprived of the great monuments and libraries of cuneiform tablets that only a powerful king could produce – curse the resulting interregnum as a dark age or disaster. Probably most people were better off during these times.

The book ends with a chapter on “barbarians”. Scott reminds us that until about 1600, the majority of human population lived outside state control; histories that focus on states and forget barbarians are forgetting about most humans alive. In keeping with his thesis, Scott reviews some ancient sources that talk about barbarians in the context of people who did not farm or eat grain. Also in keeping with his thesis, he warns against thinking of barbarians as somehow worse or more primitive. Many barbarians were former state citizens who had escaped state control to a freer and happier lifestyle. Barbarian tribes could control vast trading empires, form complex confederations, and enter in various symbiotic relationships with the states around them. Scott wants us to think of these not as primitive people vs. advanced people, but as two different interacting lifestyles, of which the barbarian one was superior for most people up until a few centuries ago.

Book Review: Against The Grain (Slate Star Codex)

Speaking of reviews, I’ve finished reading Civilized to Death, and I suppose I should write a review. It’s no secret that I’m very partial to its thesis, but highlighting some especially relevant parts might be enlightening.

Is Socialism Contrary to Human Nature?

In a recent column, philosophy professor Ben Burgis takes on the idea that socialism is “unnatural”, that is, somehow contrary to basic human nature:

…one of the most persistent claims of socialism’s critics…is the idea that socialism is not just impractical or even immoral but unnatural. Economist and libertarian social critic Murray Rothbard, for example, entitled a book of his essays Egalitarianism as a Revolt Against Nature. Psychology professor and science popularizer Steven Pinker breezily asserts in The Blank Slate: The Modern Denial of Human Nature that “socialism and communism…run against our selfish natures.”

The grim view of human nature painted by these critics has roots as fresh as evolutionary psychology and as ancient as the doctrine of original sin. It’s been used to motivate arguments against socialism not just by libertarians like Rothbard, neoliberal centrists like Pinker, or various right-wing critics of progressivism … but even by someone as firmly “on the left” as The Young Turks (TYT) host Cenk Uygur. When a C-SPAN caller asked Uygur about Marxism, he flatly stated that, “Human nature does not work the way that communists want it to work.” In some versions of this critique, the point is generalized from human nature to nature in general…

Socialism and Human Nature (Arc Magazine)

Burgis goes on to debunk many of these criticisms, but I’d like to take them on from a different perspective. I’d like to use some of the historical and anthropological works we’ve looked at over the past few years. To do that, we need to take a look at some of the history of thought on this issue.

Fundamentally the question being asked here is, what is human nature? To some extent, this is an impossible question to answer. As Chris Ryan writes in Civilized To Death, “To ask ‘What is human nature?’ is like asking ‘What’s the natural state of [water]?’ So much depends on conditions. Liquid, solid, gas—temperature and pressure make all the difference.” (p. 120)

While the question of human nature may be impossible to answer definitively, let’s take a trip through the beginnings of anthropology and sociology to see if we can at least shed a little light on it. This will help us to determine whether or not human nature—so far as we can determine what it is—is fundamentally incompatible with socialist ideals of liberty, solidarity and collective ownership, as mainstream economists assert.

The Father of Anthropology

We could start with Lewis Henry Morgan. He was an attorney and senator who lived in upstate New York in the nineteenth century. At that point in history, the Native American tribes, although decimated by disease and territorial expansion, still had remnants of their original social structure intact. Morgan was adopted into the Seneca tribe, which was part of what has come down to us as the Iroquois League (although they obviously did not call themselves that). He studied their social organization from a scientific/intellectual perspective. It was one of the first attempts by a Western European culture to understand other cultures rather than just obliterate them.

Morgan traveled around the western United States investigating the kinship relations of various Native American tribes. For tribes further afield, he sent questionnaires to missionaries who were living with tribal cultures on other continents.

Morgan is considered to be “the father of anthropology” for his discovery that the primordial form of human social organization was not solitary, contractual or territorial, as many Enlightenment philosophers had believed, but based on kinship; that is, descent from a common ancestor, whether real or imagined. The social arrangement based around this concept was called the tribe: “Tribes were the initial social structures humans created to further their survival.” (1)

The flexibility of this arrangement stems from three features. First, if you traced your lineage further back in time to a more remote ancestor, then you could enlarge the circle of kinship. Thus, even disparate tribes could be conjoined if they had a more remote ancestor in common (again, real or imaginary—oftentimes imaginary). Second, non-blood relatives could be “adopted” into an extended family to increase the size of the tribe, as Morgan had been. Such adoption would determine their social roles. This is called fictive kinship by anthropologists. Third was the joining of previously unrelated people through the marriage partnership, called affinal relationships. The children of such a union share the genetics of both their mother’s and father’s families, and therefore both families are invested in the offspring’s future and become interrelated even though they were not before (what we call in-laws in English, a rather bizarre term, with law standing in for kinship). As Robin Dunbar notes, marriage is not about the parents’ families; it’s about the children. Through affinity, families could be joined, recombined, and enlarged. Marriage was also a major source of political alliances until relatively modern times, even in Western Europe.

The other thing of note was that kinship was not at all uniform—whom one regarded as a parent, sibling, cousin, or other relative varied greatly across tribal cultures. For example, in the Iroquois culture, one’s mother was not just one’s biological mother, but all of her sisters as well (whom we would call aunts). Similarly, one’s father was not only one’s biological father, but also all of his brothers (whom we would call uncles). In many cultures, those whom we would call cousins were regarded no differently than our brothers and sisters, with the same terms being used for both. In some cultures, one’s biological father and his family were not even considered to be related to you at all—only your mother’s relatives were! In these cultures, the major male figure in your life would be your mother’s brother instead of your biological father. In many cultures, parallel cousins (children from a parent’s same-sex sibling) were considered your relatives, but cross cousins (from an opposite-sex sibling) were not.
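To make the parallel/cross-cousin rule concrete, here is a minimal sketch in Python (the function name and the way the sexes are encoded are illustrative assumptions of mine, not anything drawn from Morgan or the anthropological literature):

# Under Iroquois-type terminology, a first cousin linked through
# same-sex siblings (your mother's sister, or your father's brother)
# is a "parallel" cousin and takes a sibling term; a cousin linked
# through opposite-sex siblings is a "cross" cousin.
def iroquois_cousin_term(my_parent_sex: str, linking_sibling_sex: str) -> str:
    if my_parent_sex == linking_sibling_sex:
        return "sibling"       # parallel cousin: treated as a brother or sister
    return "cross-cousin"      # cross cousin: often the permitted marriage partner

print(iroquois_cousin_term("F", "F"))  # mother's sister's child -> sibling
print(iroquois_cousin_term("F", "M"))  # mother's brother's child -> cross-cousin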

Such kinship relations organized the tribe’s entire social structure. They defined one’s rights, duties, debts and obligations relative to the rest of the tribe. They also dictated whom one could and could not marry (e.g. one’s cross-cousin, but not one’s sibling, with parallel cousins often being considered siblings). Relationships were defined by status—older over younger, men (in some aspects) over women, husbands over wives, parents over children, brothers over cousins, kinsmen over strangers, etc. As later writers would put it, social relations were primarily status-based rather than contract-based (whether the contract was written or implied). Government–citizen, employer–employee, business-partner, and even husband–wife relationships were examples of contract-based relationships in European societies (since marriage could be dissolved and was primarily concerned with legal matters such as inheritance).

All of this was more important than geography, which in principle did not matter at all. You could live right next door to someone and, if they were not a part of your tribe, then you had no obligatory social relationships with them. There was no such notion as “citizenship” based on living in a common circumscribed territory, or under the same nominal government. Plus, many tribes throughout history were nomadic, and thus they could not have based their shared identity on any particular piece of land or territory. Of course, in practice, most people did live closest to their tribes and kinsmen, muddying the issue.

The notion of a social contract was equally problematic. Written contracts could not work, since most tribal cultures did not use writing. And the notion of an implied contract would be impossible without a shared cultural heritage (particularly language and religion), which only existed among members of the same tribe to begin with.

And so, it was clear from the evidence that kinship was the universal and primordial form of human social organization, with all others being later inventions, sometimes much later inventions. Such arrangements had been preserved in cultures all over the world even up until Morgan’s time (the late 1800s), and still exist in remote places all over the world today.

Morgan wrote a book about his discovery called Systems of Consanguinity and Affinity of the Human Family. He then followed it up with his best-known work, Ancient Society, in which he outlined the basic structures along which he believed all ancient societies around the world had been constituted.

Morgan’s book was hampered by its depiction of societies progressing through definite stages from Savagery to Barbarism to Civilization. Aside from the derogatory connotations of these terms, the very notion of all cultures passing through a predictable series of stages has been soundly rejected by modern anthropologists, as has the notion of some societies being more “highly evolved” than others (although this is enjoying a recrudescence thanks to the Alt-right). Social development does not have a direction, up, down or otherwise. In Morgan’s defense, however, nearly all scholars in the nineteenth century framed their arguments as a sequence of stages, probably influenced by biological evolution. This mistaken notion colored the thinking of scholars in all sorts of disciplines regardless of their political views until deep into the twentieth century.

Despite this drawback, the core of Morgan’s work—his argument that consanguineal kinship formed the basis of human social organization, with all other arrangements following later—has proven basically correct. Subsequent anthropology was built on this foundation, even if it rejected some of Morgan’s other claims.

Dunbar’s Number and Fractals

If we accept that kinship and tribes were the primordial form of human social organization, is there a way of knowing the size of those tribes? In other words, apropos of our discussion of kinship groupings, is there a “natural” size of social organization?

Although Morgan didn’t know it at the time, subsequent research has come up with an answer. The evolutionary psychologist Robin Dunbar had the idea that the size of the neocortex relative to the rest of the brain is roughly correlated with group size in social animals. From this, he calculated that the “mean group size” of humans would be around 148 people, often rounded up to 150. This has been called “Dunbar’s Number” in his honor. Another group of researchers arrived at a larger number, around 290 (with a median of 231)—the Bernard–Killworth number.
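For the curious, the calculation behind that 148 figure is just a log-log regression of group size on neocortex ratio. Here is a back-of-the-envelope sketch in Python; the coefficients are the ones commonly cited from Dunbar’s 1992 paper, so treat this as an illustration rather than gospel:

import math

def predicted_group_size(neocortex_ratio: float) -> float:
    # Dunbar's regression: log10(group size) = 0.093 + 3.389 * log10(neocortex ratio),
    # where the neocortex ratio is neocortex volume divided by the volume
    # of the rest of the brain.
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# Humans have a neocortex ratio of roughly 4.1:
print(round(predicted_group_size(4.1)))  # -> 148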

Furthermore, there are a series of widening “concentric circles” of social relations, with each having a different relation to an individual. Again, this is due to the cognitive limits in the human brain.

According to the theory, the tightest circle has just five people – loved ones. That’s followed by successive layers of 15 (good friends), 50 (friends), 150 (meaningful contacts), 500 (acquaintances) and 1500 (people you can recognise). People migrate in and out of these layers, but the idea is that space has to be carved out for any new entrants.

Dunbar’s number: Why we can only maintain 150 relationships (BBC Future)

Even though estimates differ on the number of close, personal relationships we can maintain, we can see that there is a definite upper limit. It is not 500. It is not 1,000, or 10,000, and certainly not a million. This means that there is a “natural” size limit for human social groups, based on our evolutionary heritage.
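Incidentally, the successive circles described in the BBC piece grow by a roughly constant factor of about three, a pattern you can check in a couple of lines (a toy illustration; the canonical 5/15/50/150/500/1,500 figures are rounded observations, not exact powers of three):

# Each layer of the social network is roughly three times the size
# of the layer nested inside it.
layers = [5 * 3 ** k for k in range(6)]
print(layers)  # -> [5, 15, 45, 135, 405, 1215], close to the observed 5/15/50/150/500/1500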

Dunbar’s number is found all throughout human social groupings, from the size of military units, to the size of Mennonite villages, to the size of small businesses, to the length of people’s Christmas card lists (back when people still did that). And while Iroquois villages could be as large as several thousand people, the longhouse—the center of village life—tended to hold between 20 and 90 people, consisting of several matrilineal families, each with their own space. The longest one ever excavated could hold 180 people.

According to Dunbar and many researchers he influenced, this rule of 150 remains true for early hunter-gatherer societies as well as a surprising array of modern groupings: offices, communes, factories, residential campsites, military organisations, 11th Century English villages, even Christmas card lists. Exceed 150, and a network is unlikely to last long or cohere well.

Dunbar’s number: Why we can only maintain 150 relationships (BBC Future)

Human social organization also appears to be fractal. Again, Morgan could not have known this, since the concept of fractals was only described by mathematician Benoit Mandelbrot in 1975. A fractal is a geometric figure where each part has the same statistical character as the whole. That is, self-similar patterns occurring at progressively larger and smaller scales.

The recursive (or fractal) nature of human social organization allows it to scale up and down as needed, presumably based around the Dunbar number, above. Thus, the structure of the family is basically the same as that of the clan, which is basically the same as that of the tribe, which is the same as that of the nation, which is the same as that of the confederacy; the same basic structure repeats at progressively larger scales, much like Russian Matryoshka dolls nesting inside one another.
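As a sketch of what “the same structure at every scale” means in practice, here is a toy recursive data structure in Python (the names and group sizes are invented for illustration, not drawn from the ethnographic record):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Group:
    # A social unit whose structure repeats at every scale:
    # a family, a clan of families, a tribe of clans, and so on.
    name: str
    members: int = 0                       # individuals at this level (leaf groups only)
    subgroups: List["Group"] = field(default_factory=list)

    def size(self) -> int:
        # Total population: own members plus everyone in the nested subgroups.
        return self.members + sum(g.size() for g in self.subgroups)

# Illustrative only: five families of ~30 people form a clan of ~150 (the
# Dunbar number); five such clans form a tribe, and so on upward.
clans = [Group(f"clan-{j}", subgroups=[Group(f"family-{j}.{i}", members=30)
                                       for i in range(5)])
         for j in range(5)]
tribe = Group("tribe", subgroups=clans)
print(clans[0].size(), tribe.size())  # -> 150 750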

Incidentally, only humans seem to be capable of doing this. Even our closest animal relatives—chimpanzees and bonobos—despite being gregarious and highly social animals, cannot accomplish this sort of flexible social scaling. Language is another recursive structure unique to humans, and it also plays a social grooming and bonding role.

Numerous scholars in the nineteenth century noted that, in the age of monarchy, rulers would commonly depict themselves as benevolent fathers, with their subjects as the children and the kingdom a kind of large-scale family. Even in the twentieth century, dictators who ruled over millions of people, like Joseph Stalin and Mao Zedong, often depicted themselves as benevolent uncles.

****

So, now we’ve limned at least a little bit of what human nature might look like. It’s based on kinship—whether real or fictive—organized in social groupings of no more than perhaps 150–250 close relationships (what has been humorously termed the monkeysphere), repeated on progressively larger and larger scales, creating a sort of nested group of widening social circles, with different social relationships inside each circle.

By contrast, living in larger, denser environments surrounded mostly by strangers has sometimes been referred to as a Behavioral Sink. The term comes from John B. Calhoun’s studies of rat and mouse behavior in overcrowded and highly stressful environments.

The Iroquois

Morgan’s work had a huge impact in Europe, and he was cited by scholars as diverse as Charles Darwin, Sigmund Freud, and Karl Marx.

It’s that last one that especially concerns us given our topic of conversation. Morgan wrote several papers on the Iroquois system of economic organization, as well as describing it in Ancient Society.

However, it is important to note that social, economic and political relationships were not typically separated from each other in tribal cultures—all were interwoven. This characteristic of early economies was referred to by economic historian Karl Polanyi as embeddedness. Embeddedness refers to the degree that “economic” behavior (for individual pecuniary gain) is constrained by non-economic factors, whether political, religious, or social. Status in most Native American societies, for example, was not conferred by wealth, and the hoarding of wealth would have been impossible due to social pressures. In some Native American cultures (notably the Pacific Northwest), social status was conferred by giving away acquired wealth in elaborate ceremonies (potlatch). Would-be chiefs would compete to see who could give away the most, and hence gain the most prestige.

The Iroquois longhouses previously mentioned were home to several matrilineal clans—people related to each other on their mother’s side. Because of their matrilineal nature, the longhouses were collectively managed by the matriarchs (eldest women) of each clan, the Clan Mothers, who formed a Clan Mothers Council. No one went without shelter.

Iroquois villages had a system of communal land distribution—all land was owned in common by the entire village and parceled out to various clans and families to cultivate. Clans had usufructuary rights, meaning they had rights to the produce generated from land that they farmed collectively, but they had no permanent ownership claims over the land itself. So long as produce was being cultivated on it, an individual or clan had rights to the plot. Once a plot was no longer cultivated, it reverted to the collective ownership of the tribe, to be redistributed as needed by the Clan Mothers Council. Thus, when land was “sold” to Europeans, the Iroquois had no understanding that they had given up rights to it permanently (hence the pejorative term “Indian givers”). Land was considered sacred, and thus could not “belong” to any single person or family.

Agriculture was based around the “three sisters”—corn, beans and squash. Plows were not used; agriculture was slash-and-burn (swidden), meaning that new plots were cleared each growing season and worked with hoes. If useful farmland around the village became depleted, the village had to move to another location (another reason social relations could not be based around land).

Work was performed cooperatively. There was a typical gendered division of labor, with women responsible for child-rearing, planting, cultivating and harvesting crops, while men did the hunting, fishing, trading, building and forest management. The majority of the tribe’s goods were produced by the women. No money was used—economic exchange was conducted as a gift-based economy.

The produce of the village was stored in collective granaries in the longhouse, which each clan had access to as needed. Everyone had as much as they needed, and no one starved or went hungry.

As for political leadership, Iroquois tribes were not completely egalitarian, but were led by a sachem, what we might crudely refer to as a chief. But, critically, the office of sachem was conferred on an individual by members of the tribe via collective deliberation, and could be rescinded at any time if the sachem failed to perform adequately. And—most critically of all—the office of sachem was not hereditary. A sachem’s family did not have any special claim on the office, and it was not passed down from father to offspring, as monarchy was in Europe. Nor did the sachem have any claim to wealth far greater than that of other members of his tribe.

Sachems could not issue orders or command other people to do things. They could not exert control over other members of the tribe, like a European king or dictator, or a capitalist boss or executive. There was no permanent, standing army or police force among the Iroquois, so there was no way of enforcing any orders by the sachem or anybody else. Every member of the tribe was free to do as he or she pleased, bound only by the requirements of kinship and social convention.

Among the Iroquois League, decisions were made collectively by the various representatives of the tribes, with no one tribe being able to strongarm the others. Here is a good description from testimony before the Bureau of Indian Affairs given by one Cephas A. Watt (2):

The CHAIRMAN. Mr. Watt, you stated that you weren’t one of the regular chiefs. How are the regular chiefs selected here among the Indians?
Mr. WATT. They are elected by the clan.
The CHAIRMAN. Elected by the clan. Explain what you mean by that.
Mr. WATT. They have the clan system. There are eight clans.
The CHAIRMAN. Among the Senecas?
Mr. WATT. Among the Senecas; and there are four on each side of the house, you see. The four clans here do not intermarry, which is their general rule, and they have to go over to the other four clans to select their mates; and those clans, or clan mothers appoint the chiefs, that is, the oldest woman in the clan appoints him. Of course, these clans all follow their mother’s side, it doesn’t make any difference what the father might be—he might be a Japanese, but so long as he has an Indian mother the children are legitimate Senecas, and they share in all the usual privileges. So they select someone of the clan whom you might say is “a good Indian” of the clan, as their chief. There might be 25 in that clan, and they select one of them as chief.
The CHAIRMAN. By whom is he selected; by the clan itself, or by the clan mother, you say?
Mr. WATT. Yes; by the clan mother. Of course, they all have to agree.
The CHAIRMAN. But the selection is made by the clan mother?
Mr. WATT. Yes. And then they are confirmed by the Six Nations chief, and the head chief of the Six Nations.
Mr. HARRISON. But you have to have a condolence meeting of your clan, don’t you?
Mr. WATT. Yes, they have a condolence ceremony, where they are lectured by the chiefs; admonished to be good.
Senator WHEELER. Do they always follow the admonition that they get?
Mr. WATT. Well, some of them do, but when they see that the chief himself is not living up to the requirements as to his character it is different.
The CHAIRMAN. What is the term of office of these regular chiefs?
Mr. WATT. It is supposed to be for life. That is, it is for as long as he is of—
Senator WHEELER interposing. Good behavior.
Mr. WATT. Yes.
The CHAIRMAN. And if the chief dies or resigns, a new meeting is held and a new chief is selected?
Mr. WATT. A new meeting is held and a new chief is selected.
The CHAIRMAN. I understood you to say that you were a chief of some kind here. How did you get your title of “chief”?
Mr. WATT. I haven’t been through the condolence ceremony yet, but I have been appointed by my clan mother and I have been elected, and it holds good until a condolence ceremony is held.
The CHAIRMAN. What clan do you belong to?
Mr. WATT. I don’t just understand, but I think I belong to the Heron clan.
Senator WHEELER. That is a fish clan?
Mr. WATT. Long-legged bird.

Ancient Law and Primitive Property

Now, the obvious objection to this is: sure, that’s fine for a bunch of “primitive” savages running around the forests of North America, but what’s all that got to do with Europe?

It’s here that a few other scholars come into the mix. One of them based his conclusions not on the natives of North America, but on India (the actual Indians).

Sir Henry Sumner Maine was a judge, legislator and legal scholar who had spent many years in British India. There he noted that Indian law was profoundly different from that of England. In India, the laws were based around something he called the “Joint Undivided Village” or the “Joint Undivided Family,” which was headed by the eldest male clansman, who exercised “despotic” control over the whole family. Laws had little to do with individual behavior, contracts, or inheritance, because these were not relevant to village society.

From his studies of the legal codes of the ancient Romans, the ancient Germanic laws which had been transcribed (such as the Salic Law), and the laws of ancient Ireland (the Brehon Laws), Maine concluded that these ancient law codes could only be properly understood through the realization that these societies had all been constituted along the same basic lines as the Indian villages he had witnessed! From this he concluded that the basic unit of ancient societies was not the individual, but the extended family (i.e., House or Clan):

The Family then is the type of an archaic society in all the modifications which it was capable of assuming; but the family here spoken of is not exactly the family as understood by a modern. In order to reach the ancient conception we must give to our modern ideas an important extension and an important limitation. We must look on the family as constantly enlarged by the absorption of strangers within its circle, and we must try to regard the fiction of adoption as so closely simulating the reality of kinship that neither law nor opinion makes the slightest difference between a real and an adoptive connection.

On the other hand, the persons theoretically amalgamated into a family by their common descent are practically held together by common obedience to their highest living ascendant, the father, grandfather, or great-grandfather. The patriarchal authority of a chieftain is as necessary an ingredient in the notion of the family group as the fact (or assumed fact) of its having sprung from his loins; and hence we must understand that if there be any persons who, however truly included in the brotherhood by virtue of their blood-relationship, have nevertheless de facto withdrawn themselves from the empire of its ruler, they are always, in the beginnings of law, considered as lost to the family.

It is this patriarchal aggregate—the modern family thus cut down on one side and extended on the other—which meets us on the threshold of primitive jurisprudence. Older probably than the State, the Tribe, and the House, it left traces of itself on private law long after the House and the Tribe had been forgotten, and long after consanguinity had ceased to be associated with the composition of States. It will be found to have stamped itself on all the great departments of jurisprudence, and may be detected, I think, as the true source of many of their most important and most durable characteristics. At the outset, the peculiarities of law in its most ancient state lead us irresistibly to the conclusion that it took precisely the same view of the family group which is taken of individual men by the systems of rights and duties now prevalent throughout Europe.

The Ancient Greeks and Romans, the Celts, the Germans, the Slavs—each and every one had begun as a collection of village communities where kinship defined the overall social structure and all major resources—except for minor chattels—had been owned and held in common.

In other words, very similar to what Morgan described of the Iroquois.

Maine describes a progression from the Joint Undivided Family, to the House Community, to the Village Community (his terms). At each stage, property becomes less communal and more alienable:

…The group which I have placed at the head, the Hindu Joint Family, is really a body of kinsmen, the natural and adoptive descendants of a known ancestor…so long as it lasts, it has a legal corporate existence, and exhibits, in the most perfect state, that community of proprietary enjoyment which has been so often observed…in cultivating societies of archaic type…

The House-Community, which comes next in the order of development, has been examined by M. de Laveleye, and by Mr. Patterson, in Croatia, Dalmatia, and Illyria…These House-Communities seem to me to be simply the Joint Family of the Hindus, allowed to expand itself without hindrance and settled for ages on the land. All the chief characteristics of the Hindu institution are here—the common home and common table, which are always in theory the centre of Hindu family life; the collective enjoyment of property and its administration by an elected manager.

Nevertheless, many instructive changes have begun which show how such a group modifies itself in time. The community is a community of kinsmen; but, though the common ancestry is probably to a great extent real, the tradition has become weak enough to admit of considerable artificiality being introduced into the association, as it is found at any given moment, through the absorption of strangers from outside. Meantime, the land tends to become the true basis of the group; it is recognised as of preeminent importance to its vitality, and it remains common property, while private ownership is allowed to show itself in moveables and cattle.

In the true Village-Community, the common dwelling and common table which belong alike to the Joint Family and to the House-Community, are no longer to be found. The village itself is an assemblage of houses, contained indeed within narrow limits, but composed of separate dwellings, each jealously guarded from the intrusion of a neighbour. The village lands are no longer the collective property of the community; the arable lands have been divided between the various households; the pasture lands have been partially divided; only the waste remains in common.

Thus, Maine came to the conclusion that, “We have the strongest reason for thinking that property once belonged not to individuals nor even to isolated families, but to larger societies composed on the patriarchal model;” and that “by far the most important passage in the history of Private Property is its gradual elimination from the co-ownership of kinsmen.” In addition, transfers of land were never between individuals, but between groups, and so it was far more ceremonious than the mere signing of a contract: “As the contracts and conveyances known to ancient law are contracts and conveyances to which not single individuals, but organised companies of men, are parties, they are in the highest degree ceremonious…” He found evidence of this all over Europe:

…the mode of transition from ancient to modern ownerships, obscure at best, would have been infinitely obscurer if several distinguishable forms of Village Communities had not been discovered and examined…

The chiefs of the ruder Highland clans used, it is said, to dole out food to the heads of the households under their jurisdiction at the very shortest intervals, and sometimes day by day. A periodical distribution is also made to the Sclavonian villagers of the Austrian and Turkish provinces by the elders of their body, but then it is a distribution once for all of the total produce of the year. In the Russian villages, however, the substance of the property ceases to be looked upon as indivisible, and separate proprietary claims are allowed freely to grow up, but then the progress of separation is peremptorily arrested after it has continued a certain time. In India, not only is there no indivisibility of the common fund, but separate proprietorship in parts of it may be indefinitely prolonged and may branch out into any number of derivative ownerships, the de facto partition of the stock being, however, checked by inveterate usage, and by the rule against the admission of strangers without the consent of the brotherhood.

It is not of course intended to insist that these different forms of the Village Community represent distinct stages in a process of transmutation which has been everywhere accomplished in the same manner. But, though the evidence does not warrant our going so far as this, it renders less presumptuous the conjecture that private property, in the shape in which we know it, was chiefly formed by the gradual disentanglement of the separate rights of individuals from the blended rights of a community.

Our studies in the Law of Persons seemed to show us the Family expanding into the Agnatic group of kinsmen, then the Agnatic group dissolving into separate households; lastly the household supplanted by the individual; and it is now suggested that each step in the change corresponds to an analogous alteration in the nature of Ownership.

Many remnants of this arrangement survived into Maine’s own time. They could be seen even in Western Europe, particularly in a region called Ditmarsh, and in Switzerland, where all land was communally owned by the canton and the cantons were united in a larger Swiss federation (similar in many ways to the Iroquois league), with no one canton in charge of all the others. And the Swiss, like the Iroquois, had no standing army.

But the most pervasive examples came from outside Western Europe. Eastern Europe preserved such communities in something closer to their original form. Maine refers to such villages in Sclavonia and Dalmatia (what is today Croatia) and in Russia. He also refers to communities farther afield in India, the Middle East, and Java (modern-day Indonesia).

The naturally organised, self-existing, Village-Community can no longer be claimed as an institution specially characteristic of the Aryan races. M. de Laveleye, following Dutch authorities, has described these communities as they are found in Java; and M. Renan has discovered them among the obscurer Semitic tribes in Northern Africa. But, wherever they have been examined, the extant examples of the group suggest the same theory of its origin [as] the Germanic village-community or Mark; ‘This lowest political unit was at first, here (i. e. in England) as elsewhere, formed of men bound together by a tie of kindred, in its first estate natural, in a later stage either of kindred natural or artificial.’

In the end, such village communities were supplanted by the feudal system in Europe. Capitalism developed out of this feudal system:

The Manor or Fief was a social group wholly based upon the possession of land…At this point the notion of common kinship has been entirely lost. The link between Lord and Vassal produced by Commendation is of quite a different kind from that produced by Consanguinity…there would have been no deadlier insult to the lord than to attribute to him a common origin with the great bulk of his tenants. Language still retains a tinge of the hatred and contempt with which the higher members of the feudal groups regarded the lower…There is, in fact, little to choose between villain, churl, miscreant, and boor.

The break-up of the feudal group, far advanced in most European countries, and complete in France and England, has brought us to the state of society in which we live. To write its course and causes would be to re-write most of modern history, economical as well as political…

Maine’s investigations were taken up by others, most notably the Belgian political economist Émile de Laveleye, who wrote a book on the subject called Primitive Property (De la propriété et de ses formes primitives). In it, he documented copious examples from every corner of the globe confirming that property was at first held in common, and that only gradually did it fall under the exclusive ownership of individual families. Only much later did it come to belong to single, solitary individuals, who then claimed “absolute” rights over it in perpetuity; this process progressed the furthest in the Anglo-Saxon countries. From the introduction to Laveleye’s work by T. E. Cliffe Leslie:

Sir Henry Maine in his lectures at the Middle Temple was, I believe, the first to lay down with respect to landed property the general proposition, afterwards repeated in his Ancient Law, that “property once belonged not to individuals, nor even to isolated families, but to larger societies.”…Sir Henry Maine’s residence for several years in India, had enabled him to collect fresh evidence from existing forms of [Hindu] property and social organization, in support of his original doctrine, that the collective ownership of the soil by communities larger than families, but held together by ties of blood or adoption, was in eastern as well as in western countries the primitive form of the ownership of the soil…To the evidence previously collected by Sir H. Maine and the Danish and German scholars already referred to, [Laveleye] has added proofs gathered from almost every part of the globe. Ancient Greece and Rome, Medieval France, Switzerland, the Netherlands, Russia, the southern Slav countries, Java, China, part of Africa, Central America, and Peru, are among the regions laid under contribution…

The preponderance of evidence was that, in every corner of the globe, it was not private property, but collective property, that was the primordial form of resource ownership. This has been further confirmed by studies of foraging groups, among whom private property is virtually unknown.

Marx and Engels believed from such evidence that primitive societies were universally matrilineal and matriarchal (a state they called, following an earlier scholar named Bachofen, the Mutterrecht, or Mother-right). It was only after societies switched to a patriarchal model, they asserted, that the communal ownership of property was overthrown by individual patriarchal families:

…this revolution – one of the most decisive ever experienced by humanity – could take place without disturbing a single one of the living members of a gens. All could remain as they were. A simple decree sufficed that in the future the offspring of the male members should remain within the gens, but that of the female should be excluded by being transferred to the gens of her father. The reckoning of descent on the female line and the matriarchal law of inheritance were thereby overthrown, and the male line of descent and the paternal law of inheritance were substituted for them.

As to how and when this revolution took place among civilized peoples, we have no knowledge. It falls entirely within prehistoric times. But that it did take place is more than sufficiently proved by the abundant traces of mother-right which have been collected, particularly by Bachofen. How easily it can be accomplished can be seen in a whole series of American Indian tribes, where it has only recently taken place and is still taking place under the influence, partly of increasing wealth and a changed mode of life (transference from forest to prairie), and partly of the moral pressure of civilization and missionaries…

Today, anthropologists tend to regard this as far too simplistic. Not all societies passed through these stages, and looking for universal stages common to all societies is futile. Each case is unique. However, there is a good argument to be made that this is a reasonable description of the progression of societies now broadly classified as Indo-European, which includes much of Europe and India, as well as those of Semitic origin—including most of the ancient Near East.

Subsequent scholarship, by Michael Hudson especially, has shown that rather than surplus-generating activities originating with individual self-starting “entrepreneurs”, such activities began in the public (temple) sector for the benefit of the community. Furthermore, non-commercial debts owed to the public sector were periodically annulled in “debt jubilees” and forfeited land was frequently returned and redistributed.

Hudson counters that [Greece and Rome]…are not actually where our financial system began, and that capitalism did not evolve from bartering, as its ideologues assert. Rather, it devolved from a more functional, sophisticated, egalitarian credit system that was sustained for two millennia in ancient Mesopotamia (now parts of Iraq, Turkey, Kuwait and Iran). Money, banking, accounting and modern business enterprise originated not with gold and private trade, but in the public sector of Sumer’s palaces and temples in the third millennium B.C. Because it involved credit issued by the local government rather than private loans of gold, bad debts could be periodically forgiven rather than compounding until they took the whole system down, a critical feature that allowed for its remarkable longevity.

The Key to a Sustainable Economy Is 5,000 Years Old (Truthdig)

****

I think you can understand from the above why Marx was so intrigued by the accounts of the Iroquois League in Morgan’s work. The Iroquois nation was a living, breathing example of “primitive communism” in action. Marx took extensive notes on Morgan’s work. After his death, Friedrich Engels organized the notes and published them as The Origin of the Family, Private Property, and the State.

Subsequent scholarship in the late nineteenth century confirmed that arrangements like those of the Iroquois were not the exception but seemingly the norm all over the world, including in Western Europe, where society had begun as numerous tribal communities during the fall of the Western Roman Empire.

Thus, the idea that money and absolute private property were permanent, universal, and necessary features of the human condition was clearly contradicted by such accounts. If someone asked for an actual example of communism in action, here was something Marx and his followers could point to as an example of something very much like what they were advocating to replace capitalism: collective ownership of the means of production, with economic and political decisions made by collective deliberation among equals, cooperative labor, and everyone having access to the collective store of what was produced by the whole society. Inheritance was not bequeathed to individual offspring; rather, essential resources were redistributed as needed. And leaders were elected and served only as long as they were seen to perform their duties ethically. They could not pass down wealth and titles to their offspring in perpetuity (though their offspring could occupy the same office if the public so desired).

Unless one were to argue that the Iroquois and their culture were somehow unnatural, it would be impossible to deny such a thing was at least, in theory, possible. And if it was possible, the question became how best to make it happen, not whether such a thing could even exist, or whether it was somehow against human nature. As Marx and Engels concluded in Origin:

…once the gens is given as the social unit, we also see how the whole constitution of gentes, phratries, and tribes is almost necessarily bound to develop from this unit, because the development is natural. Gens, phratry, and tribe are all groups of different degrees of consanguinity, each self-contained and ordering its own affairs, but each supplementing the other. And the affairs which fall within their sphere comprise all the public affairs of barbarians of the lower stage.

When we find a people with the gens as their social unit, we may therefore also look for an organization of the tribe similar to that here described; and when there are adequate sources, as in the case of the Greeks and the Romans, we shall not only find it, but we shall also be able to convince ourselves that where the sources fail us, comparison with the American social constitution helps us over the most difficult doubts and riddles.

And a wonderful constitution it is, this gentile constitution, in all its childlike simplicity! No soldiers, no gendarmes or police, no nobles, kings, regents, prefects, or judges, no prisons, no lawsuits – and everything takes its orderly course. All quarrels and disputes are settled by the whole of the community affected, by the gens or the tribe, or by the gentes among themselves; only as an extreme and exceptional measure is blood revenge threatened – and our capital punishment is nothing but blood revenge in a civilized form, with all the advantages and drawbacks of civilization.

Although there were many more matters to be settled in common than today – the household is maintained by a number of families in common, and is communistic, the land belongs to the tribe, only the small gardens are allotted provisionally to the households – yet there is no need for even a trace of our complicated administrative apparatus with all its ramifications. The decisions are taken by those concerned, and in most cases everything has been already settled by the custom of centuries. There cannot be any poor or needy – the communal household and the gens know their responsibilities toward the old, the sick, and those disabled in war. All are equal and free – the women included. There is no place yet for slaves, nor, as a rule, for the subjugation of other tribes…And what men and women such a society breeds is proved by the admiration inspired in all white people who have come into contact with unspoiled Indians, by the personal dignity, uprightness, strength of character, and courage of these barbarians. p. 52

It’s also worth noting that humans have a capacity for sharing and cooperation as well as for self-aggrandizing greed, as Burgis notes:

The usual socialist response to what I will call the Human Nature Argument is to question the truth of the premise. Where anti-socialists play up our capacity for selfishness, our cruelty, and our tendency to arrange ourselves into dominance hierarchies, socialists usually emphasize our capacity for cooperation, compassion, and solidarity. So, for example, in G. A. Cohen’s Why Not Socialism? we’re asked to wonder why society shouldn’t be able to function like a camping trip, where families share equipment and resources and whatever else they individually brought, rather than assert their exclusive use over those goods.

Socialism and Human Nature (Arc Magazine)

As we’ve seen, that’s pretty much how humans functioned for most of history. Most transactions are—by their very nature—communistic, as anthropologist David Graeber has often pointed out: When we are working on a project and ask our coworker to hand us a hammer from the toolbox, he doesn’t immediately turn around and ask ‘and what are you going to pay me for it?’.

In fact, as he also points out, corporations themselves are islands of communism floating in a sea of capitalism. I’ve worked for many corporations, and when I ask for a pen from the collective storeroom, I usually get one. I don’t have to pay to put my food in the refrigerator. I don’t pay the electric bill for my individual computer or telephone. And when I ask a coworker to help me out with something, I don’t have to give him or her money over and above whatever he or she is being paid to be there. Thus, a corporation functions much like the “joint family” or household Maine described.

Similarly, while people do often have a tendency to be greedy and selfish, the degree to which we let them indulge in those impulses is entirely socially determined. Capitalism argues that the rich “deserve” their spoils, and that their hoarding behavior and lust for power will make us all better off in the long run due to the Invisible Hand. That view is getting harder and harder to justify with each passing year, especially as growth slows due to planetary (and other) limits. As Christopher Boehm argues in Hierarchy in the Forest, foragers practice a kind of “reverse dominance hierarchy” designed to keep the arrogant and greedy in check. As he put it, “In short…an apparent absence of hierarchy was the result of followers’ dominating their leaders rather than vice versa…an egalitarian relation between followers and their leader is deliberately made to happen by collectively assertive followers.” (3)

To bring up yet another nineteenth-century scholar, Piotr Kropotkin was a Russian writer of noble birth (a prince, in fact) who is associated with anarchism. Trained as a biologist, he expected to find a brutal “struggle for survival” everywhere in nature, as he had been taught by the followers of Darwin. Instead, everywhere he looked, he found cooperation to be far more pervasive than competition in the realm of biological life:

Kropotkin expected to see the brutal dog-eat-dog world of Darwinian competition. He searched high and low—but nothing. “I failed to find, although I was eagerly looking for it,” Kropotkin wrote, “that bitter struggle for the means of existence, among animals belonging to the same species, which was considered by most Darwinists (though not always by Darwin himself) as the dominant characteristic of the struggle for life, and the main factor of evolution.”

Instead he saw mutual aid—everywhere. “In all these scenes of animal life which passed before my eyes,” Kropotkin wrote, “I saw Mutual Aid and Mutual Support carried on to an extent which made me suspect in it a feature of the greatest importance for the maintenance of life, the preservation of each species and its further evolution.”

And it wasn’t just in animals. The peasants in the villages he visited were constantly helping one another in their fight against the brutal environment of Siberia. What’s more, he noted a correlation between the extent of mutual aid displayed in a peasant village and the distance of that village from the hand of government. It was just as the anarchists had suggested. “I lost in Siberia,” he wrote, “whatever faith in state discipline I had cherished before. I was prepared to become an anarchist.”

The Russian Anarchist Prince Who Challenged Evolution (Slate)

Another example of humans voluntarily cooperating in extreme circumstances is given in Rebecca Solnit’s book, A Paradise Built in Hell.

And so it is very strange indeed that evolutionary biology and psychology have come to be associated with the reactionary right and wielded as weapons in its favor. The claim that humans’ evolved tendencies favor Right-wing ideals is a topsy-turvy inversion of what arguments about human nature have historically looked like.

****

So the idea that communism is somehow uniquely “unnatural” is a very recent one, and not really consistent with the findings of anthropology. The people most closely studying primitive societies in the nineteenth century were the socialists, anarchists, Marxists, and adherents of similar philosophies, not the “Classical Liberals” or capitalists. It’s pretty clear from Adam Smith’s writing that, although he did an outstanding job describing the English mercantile economy of his own day, his historical knowledge was nonexistent. Even what little was known about primitive economies (for example, from accounts of Native Americans) was ignored by Smith, who instead reasoned from conjecture to arrive at his incorrect account of primitive barter economies (which anthropologists universally acknowledge never existed). Even today, those who invoke evolutionary science to promote the virtues of Neoliberalism (such as Steven Pinker) deride their intellectual opponents as “Marxists” (or, in the case of Jordan Peterson, as “Post-modern Neo-Marxists”).

When it comes to studying human nature, it was the socialist/anarchist Left, not the laissez-faire Classical Liberals, who were diving into the anthropological literature and grappling with it. In those days, the “human nature” argument was far more associated with the socialist Left than with the apostles of capitalist Progress. Far from seeing their ideas as unnatural, Marxists would have seen them as conforming to a more natural, egalitarian social order that had been usurped, first by kings and priests, and then by greedy capitalist bosses.

The opponents of socialism and communism, by contrast, had to deny that earlier and more “natural” forms of human social organization were superior to the commercial capitalism of their day. Primitive societies—presumably comporting far more closely with evolved human nature—were considered inferior and poor examples to emulate; commercial capitalism and markets, they asserted, were simply a “higher” and “more evolved” form of social organization.

Really, according to all the evidence, it is capitalism that is unnatural.

****

Now, the real argument against all of the above isn’t that socialism is unnatural. It is this: ideas about how our ancestors lived in small, horticultural communities surrounded by kith and kin centuries or millennia ago are not applicable to modern industrial societies with global trade networks and millions of unrelated strangers interacting on a daily basis.

That’s true, but notice that the argument has been completely flipped on its head! It’s not that socialism is unnatural, it’s that it is incompatible with the unnatural environment we find ourselves in today.

And that’s a very different argument. By this standard, everything about our current way of life is unnatural, and appeals to evolutionary biology and psychology to justify the status quo are deeply flawed from the start.

Not only that, as Burgis points out, proponents of socialism are more concerned with ideas surrounding such mundane issues as ownership, legal rights, wealth and property distribution, and so forth, and it would be preposterous to claim that there is some “natural” form of any of these things that we can discern from a study of evolutionary biology. All of those things are, to one degree or another, recent artifices necessary for modern society to function. Being artifices, we can arrange them as we wish. Evolution—whatever it supposedly decrees—is no impediment.

Even if Pinker and Uygur did have a reasonable objection to a specific vision of socialism put forward by Marx, how is this supposed to generalize into what Pinker breezily lumps together as “socialism and communism” in general? If “from each…to each” is unrealistic, it hardly follows that public ownership of the means of production is intrinsically unrealistic.

Let’s assume for the sake of argument that Marx was wrong to think that the combination of automation and collective ownership of machines would one day make it unnecessary for human butchers, brewers, and bakers to continue to be given material incentives to produce our dinners. I would argue that we can have socialism and incentives. It’s unlikely that workers in a democratic economy would feel the need to incentivize anyone by paying them 287 times what others were paid — the average pay differential between workers and CEOs last year in the United States — but this doesn’t mean they’d settle on completely flat pay scales either. If anything, they might reverse some of the inequalities we’re accustomed to under capitalism.

Socialism and Human Nature (Arc Magazine)

Perhaps the most basic argument against the claim that the kind of capitalism we’re living under now is compatible with basic human nature is the fact that we’re so disturbed by the current levels of hyper-inequality. When we see the excesses of the super-rich, or how poor people are taken advantage of, we get angry (those of us who aren’t Libertarians or Neoliberals, anyway). The current levels of anger and frustration at the grotesque inequality present in modern societies should serve as proof that there is nothing “natural” about our current arrangement from an evolutionary standpoint.

Add to that the fact that high levels of inequality are correlated with social instability and pathologies like depression and suicide. Whatever its statistical flaws, the book The Spirit Level does make a good case that high inequality isn’t healthy for societies. And Marx was hardly the first to point this out. From the Gracchi brothers, to Wat Tyler and John Ball, to the Diggers, there has always been resistance to extreme wealth inequality. After all, no one gets disturbed about too much generosity or too much sharing, or a dearth of depression and suicide. Even Thomas Jefferson was disturbed by inequality:

The general spread of the light of science has already laid open to every view the palpable truth, that the mass of mankind has not been born with saddles on their backs, nor a favored few booted and spurred, ready to ride them legitimately, by the grace of God. These are grounds of hope for others…

Ultimately, one could argue that our modern way of life, whatever our political beliefs—capitalism, libertarianism, communism, socialism, anarchism, or whatever—is contrary to human nature, no matter who owns the means of production or how we organize work and leadership. But that’s a whole other can of worms…

(1) The Psychology of Patriotism, The Shepherd Express, July 02, 2019.
(2) Source (Google Books)
(3) Christopher Boehm, et al., Egalitarian Behavior and Reverse Dominance Hierarchy (PDF)

The Origin of Paper Money 8

This is sort of an afterword, or corollary, to our story of paper money. It describes how we here in the United States ended up with the “Almighty Dollar.” Believe it or not, the dollar as we know it was not present right after the Revolution; it took a very long time to materialize.

The history of central banking in the U.S. is too much to get into here, but this post from the Minneapolis Fed sums it up probably better than I could: A History of Central Banking in the United States. It’s actually a good summary and a quick read. The WEA Pedagogy blog cited earlier has come out with a second post on the development of central banking in Europe in the nineteenth century: Central Bank History (2/5) 1814-1914 Hundred Years Peace

At this time, aside from minted coins, there was no national paper currency in the United States. Not only that, but state governments were actually prohibited from issuing their own currency! Instead, as in other Western nations, the government permitted people to use whatever they wanted as currency. Only private banks issued banknotes:

Before the Civil War, the United States was on a bimetallic system with both gold and silver serving as the basis for money. Practically, however, gold was the de facto standard since very little silver coin was in circulation. American currency consisted of bank notes and coin, with the bank notes convertible on demand into specie: paper money was thus “backed” by gold.

The federal government issued no paper currency of its own and there were no federally chartered banks; state banks issued their own bank notes. Thus in the 1850s the domestic monetary system was comprised of many hundreds of different state banks, each issuing its own paper currency.

This was problematic since the value of a bank note (as opposed to its face value) depended on the financial status of the issuing bank, and, with so many different banks, it was hard to know what money was worth.

Furthermore, there were no deposit insurance programs and no central bank. In contrast to the state banking system, the international gold standard served the United States well since its biggest trading partner and creditor, Great Britain, was also on the gold standard.

The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America (American Journal of Sociology) p. 10 (PDF)

In this era, banking was regulated by the states. Despite their importance to the financial system, pretty much anyone could start a bank and issue their own banknotes; all they needed was sufficient startup capital. This was called “free” banking by its proponents, but it acquired another, less complimentary nickname: wildcat banking.

By action of the state legislatures a bank was held to be not a corporation, which then and for many more years required a charter from the state, but a voluntary association of individuals and thus, like blacksmithing or rope-making, open to anyone. There were rules, most notably as to the hard-money reserves to be held against notes and deposits…But frequently failure to abide by regulations was discovered only after the failure of the bank made the question academic… (Galbraith, p. 88)

Although banks were required to hold enough gold reserves to be able to redeem their banknotes in specie on demand, often that could be circumvented just by moving a set amount of gold around the state ahead of the regulators, as in Michigan:

…[I]n Michigan…commissioners were put in circulation to inspect the banks and enforce the requirement [of 30 per cent reserve of gold and silver against note circulation]…Also put in circulation just in advance of the commissioners, was the gold and silver that served as the reserve. This was moved in boxes from bank to bank; when required, the amount was extended by putting a ballast of lead, broken glass and (appropriately) ten-penny nails in the box under a thinner covering of gold coins. p. 88

The results, as one might expect, were not good:

by the time of the Civil War, the American monetary system was, without rival, the most confusing in the long history of commerce and associated cupidity. The coins coming to the Amsterdam bourse were simplicity itself by comparison.

An estimated 7000 different bank notes were, in greater or lesser degree of circulation, the issue of some 1600 different or defunct state banks.

Also, since paper and printing were cheap and the right of note issue was defended as a human right, individuals had gone into the business on their own behalf. An estimated 5000 counterfeit issues were currently in circulation.

No one could do any considerable business without an up-to-date guide that distinguished the wholesome notes from the less good, the orphaned and the bad. A Bank Note Reporter or Counterfeit Detector was essential literature in any significant business enterprise. (Galbraith, p. 90)

It was not a recipe for financial stability, to put it mildly. NPR’s Planet Money describes the situation on the ground during the pre-war era of wildcat banking:

So I’m looking at the Howard Banking Company’s $5 bank note. It’s…a Santa Claus note…You get a picture, I think, of the bank president up in the left-hand corner. And then right in the middle, you get a picture of Santa Claus on a sleigh.

So what basically this note will do is that if you have this note, Santa Claus and all, you’ll go to the Howard Banking Company. And they are obligated to pay you $5 in gold and silver coins if you demand it at their bank.

If you demand it at their bank, but nobody else outside that bank is required to give you gold or silver for the note or, for that matter, even to accept it at all. Sometimes people might choose to take the bill at full face value. Sometimes they might not want it. Sometimes they’ll say, yeah, I’ll accept it but it’s a $5 bill. I’ll give you $4 for it. A dollar bill was not always worth a dollar in this world.

Now, you could argue – and some people do – that this universe of 8,000 different kinds of currency is the free market at work and that this market for bank notes helped keep banks honest. But this world, it did create huge problems for people…

The Birth Of The Dollar Bill (NPR)

What kind of problems did it create?

What if you run a store? What if you run that bar in New York and some guy walks in and gives you some bill that you’ve never seen before? What do you do? Well, that’s when you take out your trusty bank note reporter, this huge book the size of a phone book. This thing, it tells you what bills are in circulation, what they’re supposed to look like and how much they might be worth.

You would take out this big…encyclopedia-looking thing…You’d look in this. You’d find the Howard Banking Company list. It would then tell you where the bank was. And then it would tell you at what discount the note was to be accepted at.

So, for instance, if this was a particularly good bank, $5 note would trade at $5. You, as a bartender, would accept it at that. If it was trading at a discount, it would also say that. If the bank had defaulted, you’d know that. And you’d know that it’s worthless and not to accept it.

And these books, new ones come out every month to keep up with the news. And you have a different book for every city. This is because a bill from, say, a Boston bank might be worth $5 in Boston but only $4 in New York. Usually, the further you get from a bank, the less its money is worth. People’s money loses value just because they’re traveling.

The Birth Of The Dollar Bill (NPR)
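
In modern terms, the Bank Note Reporter was essentially a lookup table mapping each issuing bank to a local discount. Here is a toy sketch of that logic in Python; the banks and discounts are hypothetical (apart from the Howard Banking Company example from the transcript), purely to illustrate the mechanics:

    # Toy model of consulting a Bank Note Reporter. Figures are hypothetical.
    # The reporter maps each issuing bank to the fraction of face value at
    # which its notes are accepted locally this month.
    reporter = {
        "Howard Banking Company": 1.00,   # sound bank: accept at face value
        "Some Country Bank": 0.80,        # distant/shaky bank: 20% discount
        "Defunct Bank": 0.00,             # defaulted: worthless
    }

    def local_value(bank, face_value):
        """What a note is worth here, per this month's reporter."""
        if bank not in reporter:
            # Unlisted issue: possibly one of the thousands of counterfeits
            raise ValueError(bank + " not listed in the reporter")
        return face_value * reporter[bank]

    print(local_value("Howard Banking Company", 5))  # 5.0
    print(local_value("Some Country Bank", 5))       # 4.0 -- a $5 bill worth $4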

In fact, it could be argued that the entire monetary history of the United States after the 1830’s was one long clusterfuck. Although the United States would accidentally create, for the first time, a true national paper currency due to the financing needs of the Civil War, it would undermine those innovations after the war by trying to go back on the gold standard. The use of a bimetallic (gold & silver) standard was a sort of compromise position between gold and fiat. But back then, endless debates about the true nature of money (much like today) had no clear resolution:

In the accepted, and it must be added, far from the inspired view of the monetary history of the United States, the years after 1832 were deplorable. Free banking, the resulting bank failures, then greenbacks, agitation for more greenbacks and the pressure, partly successful, for the coinage of cheap silver combined with the recurrent panics to make the financial system of the United States, as Andrew Carnegie held, ‘the worst in the civilized world.’ (Galbraith p. 85)

Greenbacks and National Banks

What brought this madness to an end was, once again, war – this time the United States Civil War. The northern and southern United States split into two separate nations that immediately went to war with each other for control of the North American continent, especially its western expansion.

The industrialized Union needed to raise funds to prosecute its war against the predominantly agrarian Confederacy. Because the Confederacy was the source of nearly all the cotton for a massive sector of England’s economy, the South believed that England would intervene on its behalf to put an end to the “cotton famine.” England did not, instead finding other sources of cotton in Egypt and India.

The escalating cost of the war forced the Union government to go off the international gold standard and—much like in the Revolutionary War—issue fiat money. And the ultimate fiat money was the greenback. Greenbacks were called this because they were printed in green ink (as is U.S. money today, which is often informally referred to as “greenbacks,” though that is a different currency). The U.S. government issued slips of paper and used them to pay its contractors and suppliers. It declared them to be good for payment of all domestic debts, by law (fiat). And, crucially for MMT concepts, they could be used to settle all debts with the federal government—i.e. taxes, fees and fines. Greenbacks were not good for international debts, customs duties, or interest payments on the national debt. Greenbacks were not convertible into gold, and no one claimed that they were. But they were the first true national paper currency, and were useful all across the country (outside of Dixie), unlike the state bank notes. Thus, they began to circulate broadly throughout the economy.

With the outbreak of the [Civil War], the U.S. economy was severely disrupted, and U.S. government revenues plunged as Union military expenditures soared. The fiscal crisis of the Northern state meant that it could not pay its suppliers and contractors. Moreover, speculators hoarded gold and there was a general liquidity crisis.

By the end of 1861, Northern banks had to suspend convertibility and cease to exchange specie for their bank notes. Thus, the United States went off the gold standard.

The problem facing Salmon Chase, then secretary of the treasury, was a daunting one: how to fund public spending with a huge deficit and an inconvertible currency?

The first step toward a solution began in early 1862 when the government issued greenbacks, inconvertible paper money that by law was legal tender…The government also raised new taxes (including tariffs and an income tax) and borrowed to cover the deficit…By the end of the war, issuance of $450 million of greenbacks had been authorized, and the total currency more than doubled between 1861 and 1866…

Greenbacks were never thought to be a permanent solution, much less a new type of currency for the United States; rather, they were seen as a stopgap measure. It was thought that after the war, they would be replaced by “real” money, that is, money backed by gold:

…this was basically the first dollar bill issued by the U.S. government, though during the war, these dollars, they were not always worth a dollar in gold…So this was not a plan to establish a single national currency. It was a plan to fund war…

[T]he greenbacks…were seen as an emergency thing, something a government would only do in time of war. The underlying belief was that these greenbacks were temporary in the sense that we would issue them, the war would end and that…within 10 years, they’d be gone. The problem was that the consumers kind of liked them. Would you rather use a bill issued by a bank you’re not sure exists or would you want to use a bill that everyone recognizes?

Despite increased taxes and the issuance of greenbacks, the government’s costs continued to soar as the war raged on:

Thus, federal revenues grew thirteenfold…but…the national debt grew from $65 million to $2.7 billion…

To cover this national debt, the government needed to borrow. How would it do that? The usual way for governments to borrow was to sell war bonds. But the Union came up with an ingenious idea to encourage loans to the government: anyone who bought U.S. government bonds (which is loaning money to the government, remember) could use those bonds as backing to open a national bank, as opposed to a state bank. Thus, a system of national banks was created for the first time.

The national banks were different from the state banks in several crucial ways. Most notably, if you deposited U.S. government bonds with the comptroller of the currency, you could issue “official” U.S. banknotes against them. What was backing these banknotes was, of course, the government’s debt (i.e. the loans to the government). Thus, once again we see that paper money represents the government’s liabilities. The more government debt, the more money that could be issued. This is basically the MMT conclusion: the government’s debt represents the savings surplus of the private sector.

To encourage loans to the government, a new system of nationally chartered banks replaced the old one of state-chartered banks … national banks … are regulated not by states but by the federal government. These banks are created during the Civil War and they also help raise money for the Union because in order to be a national bank, you had to buy government bonds. In other words, you had to lend money to the government.

The Birth Of The Dollar Bill (NPR)

The National Banking Act of 1863 mandated the creation of national banks whose notes were backed by government bonds. Upon deposit of federal bonds with the comptroller of the currency, investors could establish a private bank and issue national bank notes up to 90% of the value of the bonds… This made it profitable to organize national banks and thus to loan money to the government by purchasing its bonds.
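
To see why this was profitable, consider a rough back-of-the-envelope sketch. The interest rates below are hypothetical (only the 90% note-issue ratio comes from the Act as described above); the point is that the same capital earned interest twice, once as bond coupons from the government and again on loans made with the notes issued against those bonds:

    # Back-of-the-envelope sketch of a national bank's income under the
    # National Banking Act. Rates are hypothetical assumptions; only the
    # 90% note-issue ratio comes from the Act as quoted above.
    bonds_deposited = 100_000       # federal bonds bought and deposited
    bond_coupon = 0.06              # assumed government bond interest rate
    note_ratio = 0.90               # notes issuable: 90% of bond value
    lending_rate = 0.07             # assumed rate earned lending the notes

    notes_issued = bonds_deposited * note_ratio
    income = bonds_deposited * bond_coupon + notes_issued * lending_rate
    print(f"Notes issued: ${notes_issued:,.0f}")                # $90,000
    print(f"Annual income: ${income:,.0f}")                     # $12,300
    print(f"Return on outlay: {income / bonds_deposited:.1%}")  # 12.3%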

The government also placed a tax on the state bank notes:

Though predictably the state banks opposed the National Bank Act, initially they did not suffer. The suspension of specie payments in 1861 relieved them of the always unwelcome need to redeem their notes in hard cash. The greenbacks … were made legal tender and served instead.

But on 3 March 1865, a mere month before Appomattox, the financial power again asserted itself. Congress was persuaded to pass additional legislation sweeping all state notes away.

A tax of 10 per cent per annum was levied on all state bank issues with effect from 1 July 1866. It was perhaps the most directly impressive evidence in the nation’s history that the power to tax is, indeed, the power to destroy.

Bank failures continued after the banning of the notes and in some years were epidemic – 140 suspensions in 1878; 496 in 1893; 155 in 1908. Most of the casualties were small state banks. For another sixty-five years these continued to be created, and the resulting loans and deposits continued to put or sustain marginal but aspiring farmers and deserving and undeserving entrepreneurs in business. (Galbraith, pp. 91-92)

Between the failures of state banks and the increasing desire to use national banks, the dollars issued by the national banks gradually took over – another manifestation of Gresham’s Law. This demonstrates another point – that the credit most favored is that of the sovereign rather than that of other individuals.

Of course, you could say that the government put its thumb on the scale by taxing the state banknotes. This is often depicted by the usual suspects as a diabolical statist plan by the “evil” government to take over the money supply and crush “freedom.” While I couldn’t find the rationale for the tax, one could make the point that since the state banks were not providing funds for the war, the tax was an entirely justifiable way to force them to contribute to the war effort in another way.

The net effect was a decline in state banks and an increase in the importance of the national banks:

From 1860 to 1870 the number of state banks declined from 1,579 to 261, while the number of national banks went from zero to 1,612. National bank notes were legal tender in the same way that greenbacks were: payable for all public and private debts, except for import duties and the interest on the national debt. The new national banking system also helped standardize the currency, although that was not its primary intent.

For all its problems, the national banking system fulfilled its original purpose of selling government bonds and funding the war. In addition to would-be bankers, government bonds were sold directly to the public, an innovation promoted by the financier Jay Cooke. By enlisting a large sales force, advertising in the newspapers, and appealing to patriotism, Cooke was able to sell hundreds of millions of dollars worth of bonds.

The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America (American Journal of Sociology) p. 10 (PDF)

Thus, after the Civil War, the United States had established—totally by accident—a consistent national currency for the first time:

So after the Civil War, the only paper money that’s circulating is the greenbacks and the bank notes issued by the national banks. And those bank notes issued by the national banks, they all start to look alike. So in the post-war years, there’s this convergence. Bills are looking more and more uniform. And for the first time, they’re all worth what they say they’re worth…So the United States at this point has kind of accidentally stumbled on an economic innovation – a $10 bill that is worth $10 in New York and in Connecticut and in New Jersey. You can take it all the way to Wyoming and it is still worth $10. And now, finally, if you’re a bartender – life is much easier…

So this really is how we got from a world of 8,000 kinds of money and of monthly guides that tell you if a $5 bill is actually worth $4 to the world basically that we know today, where if somebody gives you a $5 bill, you know it’s worth $5. This makes travel and trade much, much more efficient.

After the war, the debate was, once again, whether or not to go back on the gold standard. But there was a problem: thanks to the war, there was simply too much money floating around. Some way had to be found of reducing the amount of greenbacks in circulation. One enthusiastic goldbug even advocated a weekly bonfire of greenbacks! The trouble was that such a policy would be deflationary at a time when farmers and small businessmen were hurting:

Wartime inflation had depreciated greenbacks in relation to gold, and so, for example, the value in gold of $100 of greenbacks averaged $97.00 during February 1862 but was down to $35.00 by January 1864. (I have rounded these numbers to the dollar – CH)

If greenbacks were convertible to gold, enterprising persons could buy them cheaply on the market and “sell” them at face value to the Treasury and thus make a profit. Hundreds of millions of dollars of greenbacks were in circulation, and had the government declared these paper notes convertible into specie, the Treasury would soon have exhausted its entire gold supply.

And so, if there was any hope of making money convertible into gold once again, the first step would have to be a drastic reduction in the amount of greenbacks in circulation. The government attempted to retire the greenbacks, but its timing was terrible, as the nation was slipping into another recession. The contraction of the money supply especially hit debtors like small farmers. As a result of popular pressure, the Treasury Secretary’s power to retire greenbacks was rescinded; only about $48 million were retired.

Resistance to…hard money policy raised the larger question of whether a return to gold was desirable and whether resumption was a suitable goal for monetary policy. Related to this was the issue of how to repay the national debt, for although the law required that interest be paid in coin (and not in greenbacks), it was unclear whether the principal also had to be repaid in coin.

To do so seemed particularly unfair to some since in the darkest days of the war the government sold its bonds for greenbacks. As a result, speculators could buy, for example, $100 in greenbacks on the market with only $35.00 in gold, and with those greenbacks purchase $100 in government bonds, hoping that both the interest and principal would be repaid in gold.

The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America (American Journal of Sociology) p. 10 (PDF)
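
The arithmetic of that arbitrage is worth spelling out. A minimal sketch, using the round numbers from the passage above (the mechanics are simplified, and interest is ignored):

    # The speculator's arbitrage, using the figures quoted above: $35.00 in
    # gold bought $100 in greenbacks (January 1864), which bought $100 in
    # bonds at face value. If the principal were then repaid in gold:
    gold_spent = 35.00              # gold cost of $100 in greenbacks
    bond_principal = 100.00         # bonds bought at par with the greenbacks

    profit = bond_principal - gold_spent
    print(f"Profit in gold: ${profit:.2f}")                        # $65.00
    print(f"Return: {profit / gold_spent:.0%} (before interest)")  # 186%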

But why retire the greenbacks at all? As Benjamin Franklin had pointed out long ago, an increase in the money supply increases economic activity. And constraining money creation with something like the gold standard severely limited the amount of money that could be issued by government, much as the hated British had constrained the colonies back before the Revolution.

The debate over the gold standard shaped up along economic lines:

Farmers and other agrarian groups were heavily reliant on credit and so were hurt when money became tight. Thus, the ideas of the greenbackers found a receptive audience in American farmers who would later support the Populist movement. Greenbackism also enjoyed a brief popularity among mercantile traders, insurance brokers, capitalists of the Pennsylvania steel industry, and certain sectors of the labor movement.

Bullionism, on the other hand, found support among bankers, financiers, bondholders and importers, all of whom stood to lose from inflation and gain from an appreciation of the currency.

The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America (American Journal of Sociology) p. 10 (PDF)

In the 1870’s, the “hard money” position won out and the United States went back on the gold standard. The result was a general deflation and a series of painful crashes and depressions, including the “Long Depression” (which was called the “Great Depression” prior to 1929). Between the end of the Civil War and the Panic of 1907, there were a large number of financial panics, crises, depressions, etc. You can go visit Crisis Chronicles on the Liberty Street (NY Fed) blog to read about them in more detail. In brief, here are some of the major ones:

– Panic of 1873, a US recession with bank failures, followed by a four-year depression
– Panic of 1884
– Panic of 1890
– Panic of 1893, a US recession with bank failures
– Panic of 1896
– Panic of 1901, a U.S. economic recession that started a fight for financial control of the Northern Pacific Railway
– Panic of 1907, a U.S. economic recession with bank failures.

List of economic crises (Wikipedia)

The banking panic of 1907 eventually led to the creation of the Federal Reserve System, but that’s a story for another time.

The Greenback Legacy

Because greenbacks allowed the government to increase the money supply without worrying about convertibility to gold or increasing debt, the economy improved. After the war, many believed that the government should continue to do this; they even formed a Greenback Party to advocate for it.

Even today, some have made an argument for such a policy to be revived. Ellen Brown has written about this on her Web of Debt blog:

Helicopter money is a new and rather pejorative term for an old and venerable solution. The American colonies asserted their independence from the Motherland by issuing their own money; and Abraham Lincoln, our first Republican president, boldly revived that system during the Civil War. To avoid locking the government into debt with exorbitant interest rates, he instructed the Treasury to print $450 million in US Notes or “greenbacks.” In 2016 dollars, that sum would be equivalent to about $10 billion, yet runaway inflation did not result. Lincoln’s greenbacks were the key to funding not only the North’s victory in the war but an array of pivotal infrastructure projects, including a transcontinental railway system; and GDP reached heights never before seen, jumping from $1 billion in 1830 to about $10 billion in 1865.

Indeed, this “radical” solution is what the Founding Fathers evidently intended for their new government. The Constitution provides, “Congress shall have the power to coin money [and] regulate the value thereof.” The Constitution was written at a time when coins were the only recognized legal tender; so the Constitutional Congress effectively gave Congress the power to create the national money supply, taking that role over from the colonies (now the states).

Outside the Civil War period, however, Congress failed to exercise its dominion over paper money, and private banks stepped in to fill the breach. First the banks printed their own banknotes, multiplied on the “fractional reserve” system. When those notes were heavily taxed, they resorted to creating money simply by writing it into deposit accounts. As the Bank of England acknowledged in its spring 2014 quarterly report, banks create deposits whenever they make loans; and this is the source of 97% of the UK money supply today. Contrary to popular belief, money is not a commodity like gold that is in fixed supply and must be borrowed before it can be lent. Money is being created and destroyed all day every day by banks across the country. By reclaiming the power to issue money, the federal government would simply be returning to the publicly-issued money of our forebears, a system they fought the British to preserve.

Trump’s $1 Trillion Infrastructure Plan: Lincoln Had a Bolder Solution (Web of Debt)
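
The Bank of England’s point about loans creating deposits can be shown with a minimal double-entry sketch (my own illustration, not from Brown’s article): when a bank lends, it does not hand over pre-existing money; it simply expands both sides of its balance sheet at once.

    # Minimal double-entry illustration: a loan creates a deposit.
    bank = {"loans": 0.0, "deposits": 0.0}   # assets; liabilities

    def make_loan(amount):
        """Grant a loan by writing matching ledger entries."""
        bank["loans"] += amount      # asset: the borrower's IOU to the bank
        bank["deposits"] += amount   # liability: new, spendable deposit money

    make_loan(1000.0)
    print(bank)   # {'loans': 1000.0, 'deposits': 1000.0} -- new money created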

And just like back in the late 1800’s, such ideas are opposed by a radical conservative movement that wishes to go back to free (wildcat) banking, abolish the central bank altogether (just like Andrew Jackson), and go back on the gold standard. One prominent example is Gary North:

Ellen Brown is the latest in a long line of pro-fiat money, anti-gold currency, monetary statists who have infiltrated the conservative movement. They have accomplished this for over 50 years by the tactic of wrapping themselves in a flag of opposition to the Federal Reserve System. I call them false-flag infiltrators.

False-flag infiltrators are remnants of a left-wing American political movement of the late 19th century: the Greenbackers, named after the green currency issued by the North during the Civil War. These paper bills were unbacked by gold. Consumer prices rose by 75%, 1861-65.

The Greenbackers were opposed to the gold standard because it kept prices low. They wanted the government to inflate the currency, so that debtors could pay off their debts with cheap money. They had a small political party for almost two decades, the Greenback Party. In 1878, it merged with a labor party to become the Greenback Labor Party. It went out of existence after 1888. Its main leader, James Weaver, co-founded the Populist (People’s) Party in 1891. It was a farm-bloc party that promoted fiat money in order to let farmers pay their debts with cheap money and also because they thought inflation would raise farm products’ prices more than the prices of other goods.

There was never any question of the Greenbackers’ politics. They were leftists, and openly sided with government controls on the economy…

https://www.garynorth.com/public/department141.cfm

Of course, today paper isn’t used very much anymore, and neither are coins. I’m sure most of us have used plastic debit cards and credit cards (hopefully paying the balance off immediately) to buy things. Money is now primarily electronic transfers between computer spreadsheets maintained by banks. Virtual money.

Conclusions

So what (if anything) have we learned about paper money?

Well, the first lesson, to me, is that paper money was inevitable. There was simply not enough gold and silver anywhere in the entire world to run the kind of economy we were entering after 1700, much less today’s. All of the gold ever mined in human history would fit into a cube roughly 20 meters on a side. Such a cube would fit many times over on one of the giant ships that bring our goods over from China.
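
That cube claim is easy to check. A quick back-of-the-envelope calculation, assuming the commonly cited estimate of roughly 190,000 tonnes of gold mined through all of history and gold’s density of about 19,300 kg per cubic meter:

    # Rough check of the "cube of all the gold ever mined" claim.
    # Assumptions: ~190,000 tonnes mined in all of history (a common
    # estimate); gold density ~19,300 kg per cubic meter.
    gold_mined_kg = 190_000 * 1_000
    density_kg_m3 = 19_300

    volume_m3 = gold_mined_kg / density_kg_m3
    side_m = volume_m3 ** (1 / 3)
    print(f"{volume_m3:,.0f} cubic meters; a cube about {side_m:.0f} m on a side")
    # -> roughly 9,845 cubic meters; about 21 m on a side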

When gold and silver were not enough, the quest was on for something else to back paper money. Land was the first idea, as we’ve seen. Eventually, paper money came to be backed by the sovereign’s debt. Paper money became circulating government liabilities. It still is today.

Paper money was invented independently in many places, indicating to me that it must have been an inevitability. The Chinese were the first to do it, and used it on and off for about 500 years (c. 950 to 1455). Ironically, they stopped using it just at the time when the New World was being colonized by Europeans, which sent massive amounts of silver their way. But soon it was invented independently in the North Atlantic trading economies, which went on to dominate the rest of the world.

Long before “official” paper money was issued by central governments, there were things like bills of exchange, municipal bonds and annuities, and shares in joint-stock corporations. That is, much of the value transfer in the European economy was already done via paper instruments and ledgers long before paper money came on the scene, and so it’s inevitable that people made the leap. They just had to get over the cultural baggage of “money” being simply a substitute for precious metals—baggage that even today many people hold on to. Money creation was done by private banks via the stroke of a pen long before governments began issuing it. Even today, banks create most of the money in the world via keystrokes.

The first paper money consisted simply of IOUs. Such IOUs were issued by private individuals before governments got into the act. Eventually governments took over and issued their own IOUs, which soon changed hands as money. This is because the government’s credit is usually the most reliable. This feeds into the idea that there is a “pyramid” of what constitutes money, with the government’s money as the ultimate means of settlement at the apex.

The faith in paper money is based on the stability of the government issuing it. Just as the credit of an individual determines how much faith people have in an IOU, so too with governments. The IOUs of Zimbabwe or Venezuela are very different from those of Japan or Switzerland. Throughout history, problems with money are almost always related to political crises, and not simply to “debasement”. Debasement of a currency is usually more a signal of decline than a cause.

Another point is one I’ve made repeatedly: every major financial innovation has been created in order to help states fund wars. That is, financial innovations were ways for states to aggregate and control the resources necessary to wage war. They were only later pressed into the service of trade and technological development. Paper money is a prime example of this. No country in history has decided to simply lose a war or surrender to a foreign power in order to limit its money supply or adhere faithfully to a precious metal standard.

Paper money was an instrument of revolution. How much has the modern world been affected by its creation? Would there even be a modern world without it? Without the ability to issue money in excess of the gold and silver floating around, neither the American nor French Revolutions would have succeeded. These paved the way for everything that came after—both socially and politically.

Is there a danger of uncontrollable inflation with paper money? Sure. But incomparably more damage is done to the average citizen by locking the financial system into artificial handcuffs. Such systems are designed exclusively for the benefit of international merchants (who wish the money they make in one country’s market to hold the same value in their home country), while causing terrible harm to the domestic populations who suffer through the resulting depressions.

As Tim Harford puts it, “While we may not always be able to trust central bankers to print just the right amount of new money, it probably makes more sense than trusting miners to dig up just the right amount of new gold.” He adds, “mainstream economists generally now believe that pegging the money supply to gold is a terrible idea. Most regard low and predictable inflation as no problem at all – perhaps even a useful lubricant to economic activity.”

In fact, it is a lack of inflation which makes debts harder to pay off. Hard-money policy has always been advocated by bankers and other big creditors, and yet the reactionary Right has somehow made this into a populist cause. Incredible!

This Reddit comment succinctly sums up the problems with tying money to gold, or anything else (I have combined two comments here):

1. Gold is scarce, and not enough of it is being found to match the increasing GDP of the world. Therefore it’s deflationary, and tends to entice people to hoard it, because simply owning the wealth makes you even wealthier as demand for it increases dramatically over supply. This chokes your economy as the poor fight over scraps and the rich do nothing. Leads to severe deflationary recessions every 5-10 years.

2. Finding a large cache of gold can royally fuck up an economy as wealth plummets overnight. There is literally nothing but luck controlling this. There’s an instance in history, of an African warlord who made a great trek to the middle east, giving out vast supplies of his wealth … during the trip, because he had so much of it, it was useless to him. He ended up crashing nearly every economy he passed through. (Mansa Musa)

3. Gold is actually very abundant on earth, but not easily accessible. There’s trillions of tons in our core, for example. And billions of tons dissolved in our oceans. Right now, gold isn’t worth the effort to extract it; but make it the driver of the worlds largest economy and people will start working on technology to get the gold out of the ocean. Pretty soon the economy will crash again.

4. In the distant future, all materials will be able to be produced in fusion furnaces. All elements were created from fusion of hydrogen during supernovas, and once fusion technology gets to the right place, we’ll be able to make any stable element we want. Gold, at that point, will become infinite. It is for this reason that we’d better figure out how to perfect fiat currencies, because we’re at the precipice of a demand-less society. One could make the argument that energy then becomes the chief currency of the world (as if it already isn’t!), but our sun provides more than enough for our civilization a billion times over.

Bottom line; basing currency on physical items is dumb and constrains us too much. It leads to crashes and gives us absolutely no ability to prevent them from happening. It’s dumb.

People … forget the single most obvious answer, which also happens to be a major reason we ditched it in the first place: it would allow any other country of sufficient size (read: China, Japan, Britain, the EU, and possibly Russia, Saudi Arabia, Australia due to their hard asset base(s)) to manipulate our currency by flooding/hoarding gold on the open markets.

Does anyone, who seriously understands international finance, think that if we handed the keys to our economy to China, that they wouldn’t use it to their ends? People that are pro-gold are a special breed of stupid.

It’s also worth noting that persistently high inflation is almost always due to other boneheaded things that governments do above and beyond simply “money printing.” In the United States in the sixties and seventies, we decided to ramp up a war on the other side of the world because of the “domino effect,” and we didn’t raise taxes to pay for it. We also built our entire economy around a substance whose supply ended up being controlled by countries that hated us (i.e. oil). Boneheaded!

All of this goes to say that given the history of paper money, the description of how our money system works by MMT seems basically correct. And, as MMT advocates point out, it is descriptive more than prescriptive. However, a proper understanding of what money is and how it works in modern industrial economies does lead to some policy recommendations. Perhaps the simplest one is this: any politician who claims that “we can’t afford it” should be voted out of office immediately. Another is that the low-information voters who claim to be “socially liberal but economically conservative” are just middle-of-the-road dimwits who haven’t a clue about how money and government finance work and should have their voting rights revoked.

Sources

I’ve mentioned some of the sources I’ve used throughout this series of posts, but I thought I’d list them here for convenience.

The major sources were Money: Whence It Came, Where It Went by John Kenneth Galbraith, originally published in 1975, with a revised version released in 1995, which is the one I used. The other is The History of Money by Jack Weatherford, which came out in 1998. This one is a bit dated, and ignores the insights of more in-depth and recent works in monetary theory.

One particularly notable recent work is Money: 5,000 Years of Debt and Power by Michel Aglietta. This is from Verso and was originally published in French.

Another major source is this article from The New Yorker: The Invention of Money

Crisis Chronicles is a series of posts by the New York Federal Reserve on its blog Liberty Street Economics: https://libertystreeteconomics.newyorkfed.org/crisis-chronicles/

There is a good summary of the various chapters of Glyn Davies’ History of Money on this site: http://projects.exeter.ac.uk/RDavies/arian/llyfr.html

The WEA Pedagogy Blog is doing a 5-part series on the history of central banking (also accessible on the Real World Economic Review blog). Part one is available here: https://weapedagogy.wordpress.com/2019/03/31/origins-of-central-banking/

The Minneapolis Fed has a good history of central banking in the United States, as noted above, available here: A History of Central Banking in the United States.

I’ve also used the following PDFs: Benjamin Franklin and the Birth of a Paper Money Economy by Owen F. Humpage

Paper Money and Inflation in Colonial America by Owen F. Humpage

The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America by Bruce G. Carruthers and Sarah Babb

Land Bank Proposals 1650-1705 by Charlie Landale

If podcasts are your preferred medium, these two are good. The first is Tim Harford’s recounting of the birth of paper money in China in his podcast 50 Things That Made the Modern Economy: https://www.bbc.co.uk/programmes/p059c7g1

And NPR’s Planet Money did an episode on the birth of the dollar bill after the Civil War: Episode 421: The Birth Of The Dollar Bill

The Origin of Paper Money 7

It’s here that we finally get to what’s really the heart of this entire series of posts, which is this: in the West, paper money has been an instrument of revolution.

Both the American and French Revolutions were funded via paper money, and it’s very likely they could not have succeeded without it. It allowed new and fledgling regimes to command necessary resources and fund their armies, which allowed them to take on established states. An established state has mints, a tax base, ownership of natural resources, the ability to write laws, and so on; a rebellion against the established order has none of these things. The ability to issue IOUs as payment for necessary resources therefore makes starting a revolution far more feasible. As we’ve already seen, just about every financial innovation throughout history came about due to the costs of waging wars. Paper money was no exception.

One might even go so far as to say that the American, French and Russian Revolutions would never have been able to happen at all without the invention of paper money!

Washington Crossing the Delaware by Emanuel Leutze (1851), Metropolitan Museum of Art, New York.

1. The United States

Earlier we looked at the financial innovations that the colonies undertook to deal with the lack of precious metals in circulation. Wherever paper money and banks had been created, commerce and prosperity increased.

Then it all came to a screeching halt.

The British government passed laws which forbade the issuing and circulation of paper money in the colonies. The monetary experiments came to an end. As you might expect, the domestic economy shrank, and commerce was severely constricted. Of course, the colonists became quite angry at this turn of events.

British authorities initially viewed colonial paper currency favorably because it supported trade with England, but following New England’s “great inflation” in the 1740s, this view changed. Parliament passed the Currency Act of 1751 to strictly limit the quantity of paper currency that could be issued in New England and to strengthen its fiscal backing.

The Act required the colonies to retire all existing bills of credit on schedule. In the future, the colonies could, at most, issue fiat currencies equal to one year’s worth of government expenditures provided that they retired the bills within two years. During wars, colonies could issue larger amounts, provided that they backed all such issuances with taxes and compensated note holders for any losses in the real value of the notes, presumably by paying interest on them.

As a further important constraint on the colonies’ monetary policies, Parliament prohibited New England from making any fiat currency legal tender for private transactions. In 1764, Parliament extended the Currency Act to all of the American colonies.

Paper Money and Inflation in Colonial America (Owen F. Humpage, Economic Commentary, May 13, 2015)

With colonial governments prohibited from issuing paper notes as IOUs, banking might have filled the void. But that option was also cut off by the British government. Last time we saw that the South Sea Bubble, along with a panoply of related schemes, had nearly taken down the entire British economy (as the Mississippi Bubble had done in France). In response, Parliament passed the Bubble Act, which forbade the formation of joint-stock companies except those expressly authorized by royal charter. This effectively put the kibosh on banking as an alternative source of paper money in the American colonies.

Given their instinct for experiment in monetary matters, it would have been surprising if the colonists had not discovered or invented banks. They did, and their enthusiasm for this innovation would have been great had it not also been systematically curbed.

In the first half of the eighteenth century the New England colonies, along with Virginia and South Carolina, authorized banking institutions. The most famous, as also the most controversial of these, was the magnificently named Land Bank Manufactory Scheme of Massachusetts which, very possibly, owed something to the ideas of John Law.

The Manufactory authorized the issue of bank notes at nominal interest to subscribers to its capital stock – the notes to be secured, more or less, by the real property of the stockholders. The same notes could be used to pay back the loan that their issue had incurred. This debt could also be repaid in manufactured goods or produce, including that brought into existence by the credit so granted.

The Manufactory precipitated a bitter dispute in the colony. The General Court was favorable, a predisposition that was unquestionably enhanced by the award of stock to numerous of the legislators. Merchants were opposed. In the end, the dispute was carried to London.

In 1741, the Bubble Acts – the British response, as noted, to the South Sea Company and associated promotions and which outlawed joint-stock companies not specifically authorized by law – were declared to apply to the colonies. It was an outrageous exercise in post-facto legislation, one that helped inspire the Constitutional prohibition against such laws. However, it effectively ended the colonial banks. (Galbraith, pp. 56-57)

Benjamin Franklin, as we have seen, was a longstanding advocate of paper money. He wrote treatises on the subject, and even printed some of it on behalf of the government of Pennsylvania. It was this paper money, he argued, that was the cause of the colonies’ general prosperity in contrast to the widespread poverty and discontent he witnessed everywhere in England:

Before the war, the colonies sent Benjamin Franklin to England to represent their interests. Franklin was greatly surprised by the amount of poverty and high unemployment. It just didn’t make sense, England was the richest country in the world but the working class was impoverished, he wrote “The streets are covered with beggars and tramps.”

It is said that he asked his friends in England how this could be so, they replied that they had too many workers. Many believed, along with Malthus, that wars and plague were necessary to rid the country from man-power surpluses.

“We have no poor houses in the Colonies; and if we had some, there would be nobody to put in them, since there is, in the Colonies, not a single unemployed person, neither beggars nor tramps.” – Benjamin Franklin

He was asked why the working class in the colonies were so prosperous.

“That is simple. In the Colonies, we issue our own paper money. It is called ‘Colonial Scrip.’ We issue it in proper proportion to make the goods and pass easily from the producers to the consumers. In this manner, creating ourselves our own paper money, we control its purchasing power and we have no interest to pay to no one.” – Benjamin Franklin

Soon afterward, the English bankers demanded that the King and Parliament pass a law that prohibited the colonies from using their scrip money. Only gold and silver could be used which would be provided by the English bankers. This began the plague of debt based money in the colonies that had cursed the English working class.

The first law was passed in 1751, and then a harsher law was passed in 1763. Franklin claimed that within one year, the colonies were filled with unemployment and beggars, just like in England, because there was not enough money to pay for the goods and work. The money supply had been cut in half.

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

A good comment to the above article notes other factors which were also at work:

The timing of the shift in British policy toward colonial scrip (1763) also encompasses…the end of the Seven Years’ War, better known in the United States as the French and Indian War.

William Pitt’s prosecution of the war was conducted by running up government debt, and the settlement of this debt after the war’s conclusion required the raising of taxes by Parliament. Since, from Britain’s view, the war had been fought in order to protect its colonies, it felt that it was only fair that the colonies bore some of the financial burden. Colonial scrip was useless to Parliament in this regard, as was barter. The repayment of British lenders to the Crown could only be done in specie.

The colonies, as you correctly pointed out, did not have this in any significant quantity, although in the view of British authorities this was the colonies’ problem and not theirs. This policy also came on the heels of the approach of benign neglect conducted by Robert Walpole as Prime Minister, under which the colonies were allowed to do pretty much as they pleased so long as their activities generally benefited the British Crown. It should also be noted here that demands of payment of taxes in hard currency is a common tactic for colonial powers to undermine local economies and customs. It played that role in fomenting the American Revolution as well as the Whiskey Rebellion of the new Constitutional republic, not to mention how it was used in South Africa to compel natives participating in a traditional economy to abandon their lands and take up work as laborers in the gold mines.

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

Now, it would be unreasonable to say that this was THE cause of the American Revolution. In school, we’re taught that taxes were the main cause: “No taxation without representation” went the slogan (the grievance that precipitated the Boston Tea Party). We’re also told that the colonists were much aggrieved by levies such as those imposed by the unpopular Stamp Act.

But the suppression of paper money and local currency issuance by the British government appears to have been just as much of a cause, though it is probably unknown to the vast majority of Americans. The reason for this strange omission is unclear. Galbraith suggests that more conservative attitudes towards money creation in modern times have led even American historians to argue that the British authorities were largely correct in their actions!

The English historian John Twells wrote about the money of the colonies, the Colonial Scrip:

“It was the monetary system under which America’s Colonies flourished to such an extent that Edmund Burke was able to write about them: ‘Nothing in the history of the world resembles their progress. It was a sound and beneficial system, and its effects led to the happiness of the people.

In a bad hour, the British Parliament took away from America its representative money, forbade any further issue of bills of credit, these bills ceasing to be legal tender, and ordered that all taxes should be paid in coins. Consider now the consequences: this restriction of the medium of exchange paralyzed all the industrial energies of the people. Ruin took place in these once flourishing Colonies; most rigorous distress visited every family and every business, discontent became desperation, and reached a point, to use the words of Dr. Johnson, when human nature rises up and asserts its rights.”

Peter Cooper, industrialist and statesman, wrote:

“After Franklin gave explanations on the true cause of the prosperity of the Colonies, the Parliament exacted laws forbidding the use of this money in the payment of taxes. This decision brought so many drawbacks and so much poverty to the people that it was the main cause of the Revolution. The suppression of the Colonial money was a much more important reason for the general uprising than the Tea and Stamp Act.”

Our Founding Fathers knew that without financial independence and sovereignty there could be no other lasting freedoms. Our freedoms and national sovereignty are being lost because most people do not understand our money system…

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

If paper money was the cause of the American Revolution, it was also the solution. The Continental Congress issued IOUs to pay for the war – called ‘Continental notes’ or ‘Continental scrip’:

With independence the ban by Parliament on paper money became, in a notable modern phrase, inoperative. And however the colonies might have been moving towards more reliable money, there was now no alternative to government paper…

Before the first Continental Congress assembled, some of the colonies (including Massachusetts) had authorized note issues to pay for military operations. The Congress was without direct powers of taxation; one of its first acts was to authorize a note issue. More states now authorized more notes.

It was by these notes that the American Revolution was financed….

Robert Morris, to whom the historians have awarded the less than impeccable title of ‘Financier of the Revolution’, obtained some six-and-a-half million dollars in loans from France, a few hundred thousand from Spain, and later, after victory was in prospect, a little over a million from the Dutch. These amounts, too, were more symbolic than real. Overwhelmingly the Revolution was paid for with paper money.

Since the issues, Continental and state, were far in excess of any corresponding increase in trade, prices rose – at first slowly and that, after 1777, at a rapidly accelerating rate…Eventually, in the common saying, ‘a wagon-load of money would scarcely purchase a wagon-load of provisions’. Shoes in Virginia were $5000 a pair in the local notes, a full outfit of clothing upwards of a million. Creditors sheltered from their debtors like hunted things lest they be paid off in worthless notes. The phrase ‘not worth a Continental’ won its enduring place in American language. (Galbraith, pp. 58-59)

Despite this painful bout of hyperinflation, as Galbraith notes, there was simply no other viable alternative to fund the Revolutionary War at the time:

Thus the United States came into existence on a full tide not of inflation but of hyperinflation – the kind of inflation that ends only in the money becoming worthless. What is certain, however, is the absence of any alternative.

Taxes, had they been authorized by willing legislators on willing people, would have been hard, perhaps impossible, to collect in a country of scattered population, no central government, not the slightest experience in fiscal matters, no tax-collection machinery and with its coasts and numerous of its ports and customs houses under enemy control.

And people were far from willing. Taxes were disliked for their own sake and also identified with foreign oppression. A rigorous pay-as-you-go policy on the part of the Continental Congress and the states might well have caused the summer patriots (like the monetary conservatives) to have second thoughts about the advantages of independence.

Nor was borrowing an alternative. Men of property, then the only domestic source, had no reason to think the country a good risk. The loans from France and Spain were motivated not by hope of return but by malice towards an ancient enemy.

So only the notes remained. By any rational calculation, it was the paper money that saved the day. Beside the Liberty Bell there might well be a tasteful replica of a Continental note. (Galbraith, p. 60)

While this is often used as yet another cautionary tale of “government money printing” by libertarians and goldbugs, a couple of things need to be noted. The first, and most obvious, is that without government money printing there would be no United States. That seems like an important point to me.

The second is a take from Ben Franklin himself. He argued that inflation is really just a sort of tax by another name. And, as opposed to “conventional” government taxation, the inflationary tax falls more broadly across the population, meaning that it is actually a more even-handed and fair method of taxation!

And you can kind of see his point. With legislative taxes, the government always has to decide who and what to tax—and how much. This means that the government picks winners and losers by necessity. Sometimes this can be done wisely, but in practice it often is not. But an inflationary tax cannot be easily manipulated by government legislation to favor privileged insiders, unlike more conventional methods of direct taxation, where the rich and well-connected are often spared much of the burden thanks to undue influence over legislators:

From 1776 to 1785 Franklin serves as the U.S. representative to the French court. He has the occasion to write on one important monetary topic in this period, namely, the massive depreciation of Congress’ paper money — the Continental dollar — during the revolution. In a letter to Joseph Quincy in 1783, Franklin claims that he predicted this outcome and had proposed a better paper money plan, but that Congress had rejected it.

In addition, around 1781 Franklin writes a tract called “Of the Paper Money of America.” In it he argues that the depreciation of the Continental dollar operated as an inflation tax or a tax on money itself. As such, this tax fell more equally across the citizenry than most other taxes. In effect, every man paid his share of the tax according to how long he retained a Continental dollar between the time he received it in payment and when he spent it again, the intervening depreciation of the money (inflation in prices) being the tax paid.

Benjamin Franklin and the Birth of a Paper Money Economy (PDF; Philadelphia Fed)

I’m not sure that many people would agree with that sentiment today, but it is an interesting take on the matter.
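Still, Franklin’s “tax” is easy to put into numbers. A quick sketch, with invented figures, of how the tax falls on each holder in proportion to how long they held the depreciating note:

    # Franklin's inflation tax: the tax paid is the purchasing power a note
    # loses between receiving it and spending it. All figures invented.
    def inflation_tax(face_value, monthly_inflation, months_held):
        remaining = face_value / ((1 + monthly_inflation) ** months_held)
        return face_value - remaining

    # Two citizens each receive a 100-dollar note; prices rise 5% per month.
    print(round(inflation_tax(100, 0.05, 1), 2))   # spends it quickly: ~4.76
    print(round(inflation_tax(100, 0.05, 12), 2))  # holds it a year:  ~44.32

The faster money is passed along, the less tax any one holder pays, which is exactly why hyperinflations see everyone racing to spend their notes.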

Once the war was won, and with the Continental notes having inflated away to nothing, the fledgling government could issue money for real. The first public building constructed by the new government was the mint, and Congress was finally granted the power to tax.

Although the war ended in 1783, the finances of the United States remained somewhat chaotic through the 1780s. In 1781, successful merchant Robert Morris was appointed superintendent of finance and personally issued “Morris notes”—commonly called Short and Long Bobs based on their tenure or time to maturity—and thus began the long process to reestablish the government’s credit.

In 1785, the dollar became the official monetary unit of the United States, the first American mint was established in Philadelphia in 1786, and the Continental Congress was finally given the power of taxation to pay off the debt in 1787, thus bringing together a more united fiscal, currency, and monetary policy.

Crisis Chronicles: Not Worth a Continental—The Currency Crisis of 1779 and Today’s European Debt Crisis (Liberty Street)

One of the more common silver coins used all over the world at this time was the Maria Theresa thaler (or taler). The name “taler” comes from the silver mines of Joachimsthal, whose coins were known as “Joachimsthalers” and, later, simply “talers” (today the town of Joachimsthal is known as Jáchymov and is located in the Czech Republic).

“Taler” became a common name for currency because so many German states and municipalities picked it up. During the sixteenth century, approximately 1,500 different types of taler were issued in the German-speaking countries, and numismatic scholars have estimated that between the minting of the first talers in Jáchymov and the year 1900, about 10,000 different talers were issued for daily use and to commemorate special occasions.

The most famous and widely circulated of all talers became known as the Maria Theresa taler, struck in honor of the Austrian empress at the Gunzberg mint in 1773…The coin…became so popular, particularly in North Africa and the Middle East that, even after she died, the government continued to mint it with the date 1780, the year of her death.

The coin not only survived its namesake but outlived the empire that had created it. In 1805 when Napoleon abolished the Holy Roman Empire, the mine at Gunzberg closed, but the mint in Vienna continued to produce the coins exactly as they had been with the same date, 1780, and even with the mintmark of the closed mint. The Austro-Hungarian government continued to mint the taler throughout the nineteenth century until that empire collapsed at the end of World War I.

Other countries began copying the design of the Maria Theresa taler shortly after it went into circulation. They minted coins of a similar size and put on them a bust of a middle-aged woman who resembled Maria Theresa. If they did not have a queen of their own who fit the description, they used an allegorical female such as the bust of Liberty that appeared on many U.S. coins of the nineteenth century.

The name dollar penetrated the English language via Scotland. Between 1567 and 1571, King James VI issued a thirty-shilling piece that the Scots called the sword dollar because of the design on the back of it. A two-mark coin followed in 1578 and was called the thistle dollar.

The Scots used the name dollar to distinguish their currency, and thereby their country and themselves, more clearly from their domineering English neighbors to the south. Thus, from very early usage, the word dollar carried with it a certain anti-English or antiauthoritarian bias that many Scottish settlers took with them to their new homes in the Americas and other British colonies. The emigration of Scots accounts for much of the subsequent popularity of the word dollar in British colonies around the world… (Weatherford, History of Money, pp. 115-116)

In 1782, Thomas Jefferson wrote in his Notes on a Money Unit of the U.S. that “The unit or dollar is a known coin and most familiar of all to the mind of the people. It is already adopted from south to north.”

The American colonists became so accustomed to using the dollar as their primary monetary unit that, after independence, they adopted it as their official currency. On July 6, 1785, the Congress declared that “the money unit of the United States of America be one dollar.” Not until April 2, 1792, however, did Congress pass a law to create an American mint, and only in 1794 did the United States begin minting silver dollars. The mint building, which was started soon after passage of the law and well before the Capitol or White House, became the first public building constructed by the new government of the United States. (Weatherford, History of Money, p. 118)

In the nineteenth century, there were fierce arguments over the establishment of a central bank in the United States. One was, in fact, chartered, and its charter was later allowed to lapse. We’ll talk a little about this in the final entry of this series next time; for now, it is beyond the scope of this post.

Scene from the French Revolution

2. France

In the late eighteenth century, France’s financial circumstances were still very dire. It constantly needed to raise money for its perennial wars with England, which, as we saw earlier, successfully funded its own wars with paper money and state borrowing via the Bank of England—an option not available to France in the wake of the Mississippi Bubble’s collapse and the failure of John Law’s Banque Royale. France’s generous loan to the United States’ revolutionaries may have been well appreciated by us Americans, but in retrospect it was probably not the best move considering France’s fiscal situation (plus the fact that Revolution would soon engulf France itself; something the French aristocracy obviously had no way of knowing at the time).

In the aftermath of the Revolution, the National Assembly repudiated the King’s debts. It also suspended taxation. But it still badly needed money, especially since many of the countries surrounding France (e.g. Austria, Prussia, Great Britain, Spain and several other monarchies) declared war on it soon after the King met the guillotine. The answer they came up with was, once again, monetizing land. In this case, it was the land seized from the Catholic Church by the Revolutionary government. “[T]he National Assembly agreed that newly nationalised properties in the form of old church land could be purchased through the use of high-denomination assignats, akin to interest-bearing government bonds, mortgaged (assignée) on the property.”

The Estates-General had been summoned in consequence of the terrible fiscal straits of the realm. No more could be borrowed. There was no central bank which could be commanded to take up loans. All still depended on the existence of willing lenders or those who could be apprehended and impressed with their duty.

The Third Estate could scarcely be expected to vote new or heavier levies when its members were principally concerned with the regressive harshness of those then being collected. In fact, on 17 June 1789 the National Assembly declared all taxes illegal, a breathtaking step softened by the provision that they might be collected on a temporary basis.

Meanwhile memories of John Law kept Frenchmen acutely suspicious of ordinary paper money; during 1788, a proposal for an interest-bearing note issue provoked so much opposition that it had to be withdrawn. But a note issue that could be redeemed in actual land was something different. The clerical lands were an endowment by Heaven of the Revolution.

The decisive step was taken on 19 December 1789. An issue of 400 million livres was authorized; it would, it was promised, ‘pay off the public debt, animate agriculture and industry and have the lands better administered’. These notes, the assignats, were to be redeemed within five years from the sale of an equivalent value of the lands of the Church and the Crown.

The first assignats bore interest at 5 per cent; anyone with an appropriate amount could use them directly in exchange for land. In the following summer when a new large issue was authorized, the interest was eliminated. Later still, small denominations were issued.

There were misgivings. The memory of Law continued to be invoked. An anonymous American intervened with Advice on the Assignats by a Citizen of the United States. He warned the Assembly against the assignats out of the rich recent experience of his own country with the Continental notes. However, the initial response to the land-based currency was generally favourable.

Had it been possible to stop with the original issue or with that of 1790, the assignats would be celebrated as a remarkably interesting innovation. Here was not a gold, silver or tobacco standard but one based solidly and logically on the good soil of France.

Purchasing power in the first years had stood up well. There was admiring talk of how the assignats had put land into circulation. And business had improved, employment had increased and sales of the Church and other public lands had been facilitated. On occasion, sales had been too good. In relation to annual income, the prices set were comparatively modest; speculators clutching large packages of the assignats had arrived to take advantage of the bargains.

However, in France, as earlier in America, the demands of revolution were insistent. Although the land was limited, the claims upon it could be increased.

The large issue of 1790 was followed by others – especially after war broke out in 1792. Prices denominated in assignats now rose; their rate of exchange for gold and silver, dealing in which had been authorized by the Assembly, declined sharply. In 1793 and 1794, under the Convention and the management of Cambon, there was a period of stability. Prices were fixed with some success. What could have been more important, the supply of assignats was curtailed by the righteous device of repudiating those that had been issued under the king. In those years they retained a value of around 50 per cent of their face amount when exchanged for gold and silver.

Soon, however, need again asserted itself. More and more were printed. In an innovative step in economic warfare, Pitt, after 1793, allowed the royalist emigres to manufacture assignats for export to France. This, it was hoped, would hasten the decay.

In the end, the French presses were printing one day to supply the needs of the next. Soon the Directory halted the exchange of good real estate for the now nearly worthless paper – France went off the land standard. Creditors were also protected from having their debts paid in assignats. This saved them from the ignominy of having (as earlier in America) to hide out from their debtors. (Galbraith, pp. 64-66)

The lands of aristocrats who had fled France were confiscated as well and used to back further issuances of paper currency. Despite this, as with the Continentals, the value of the assignats soon inflated away to very little. France then issued a new paper money, the mandats territoriaux, also carrying an entitlement to land, in an attempt to stabilize the currency. But distrust in paper currency (and in the government) was so endemic that the mandats began to depreciate even before they were issued:

With the sale of the confiscated property, a great debtor class emerged, which was interested in further depreciation to make it cheaper to pay back debts. Faith in the new currency faded by mid-year 1792. Wealth was hidden abroad and specie flowed to surrounding countries with the British Royal Mint heavily purchasing gold, particularly in 1793 and 1794.

But deficits persisted and the French government still needed to raise money, so in 1792, it seized the land of emigrants and those who had fled France, adding another 2 billion livres or more to French assets. War with Belgium that year was largely self-funded as France extracted some rents, but not so for the war with England in 1793. Assignats no longer circulated as a medium of payment, but were an object of speculation. Specie was scarce, but sufficient, and farmers refused to accept assignats, which were practically demonetized. In February 1793, citizens of Paris looted shops for bread they could no longer afford, if they could find it at all.

In order to maintain its circulation, France turned to stiff penalties and the Reign of Terror extended into monetary affairs. During the course of 1793, the Assembly prohibited buying gold or silver at a premium, imposed a forced loan on a portion of the population, made it an offense to sell coin or differentiate the price between assignats and coin, and under the Law of the Maximum fixed prices on some commodities and mandated that produce be sold, with the death penalty imposed for infractions.

France realized that to restore order, the volume of paper money in circulation must decrease. In December 1794, it repealed the Law of the Maximum. In January 1795, the government permitted the export of specie in exchange for imports of staple goods. Prices fluctuated wildly and the resulting hyperinflation became a windfall for those who purchased national land with little money down. Inflation peaked in October 1795. In February 1796, in front of a large crowd, the assignat printing plates were destroyed.

By 1796, assignats gave way to specie and by February 1796, the experiment ended. The French tried to replace the assignat with the mandat, which was backed by gold, but so deep was the mistrust of paper money that the mandat began to depreciate before it was even issued and lasted only until February 1797…

Crisis Chronicles: The Collapse of the French Assignat and Its Link to Virtual Currencies Today (Liberty Street)

…In February 1797 (16 Pluvoise year V), the Directory returned to gold and silver. But by then the Revolution was an accomplished fact. It had been financed, and this the assignats had accomplished. They have at least as good a claim on memory as the guillotine. (Galbraith, p. 66)

A 400-livres assignat issued by the République Française, 1792.

Eventually, France’s money system stabilized once its political situation more-or-less stabilized, but entire books have been written about that subject. The military dictatorship of Napoleon Bonaparte sold off the French lands in North America to the United States to raise money for its wars of conquest on the European continent. Napoleon also finally established a central bank in France based on the British model.

In 1800, the lingering suspicion of the French of such institutions had yielded to the financial needs of Napoleon. There had emerged the Banque de France which, in the ensuing century, developed in rough parallel with the Bank of England. In 1875, the former Bank of Prussia became the Reichsbank. Other countries had acquired similar institutions or soon did…(Galbraith, p. 41)

It might be going too far to say that without paper money neither the American nor the French Revolution would ever have happened. But nor is it entirely absurd to suggest as much. It’s certainly doubtful that they would have succeeded. It’s difficult to imagine how different history would be today had it not been for paper money and its role in revolution.

Paper money would continue to play that role throughout the Age of Revolutions well into the Twentieth Century, as Galbraith notes:

Paper was similarly to serve the Soviets in and after the Russian Revolution. By 1920, around 85 per cent of the state budget was being met by the manufacture of paper money…

In the aftermath of the Revolution the Soviet Union, like the other Communist states, became a stern defender of stable prices and hard money. But the Russians, no less than the Americans or the French, owe their revolution to paper.

Not that the use of paper money is a guarantee of revolutionary success. In 1913, in the old Spanish town of Chihuahua City, Pancho Villa was carrying out his engaging combination of banditry and social reform. Soldiers were cleaning the streets, land was being given to the peons, children were being put in schools and Villa was printing paper money by the square yard.

This money could not be exchanged for any better asset. It promised nothing. It was sustained by no residue of prestige or esteem. It was abundant. Its only claim to worth was Pancho Villa’s signature. He gave this money to whosoever seemed to be in need or anyone else who struck his fancy. It did not bring him success, although he did, without question, enjoy a measure of popularity while it lasted. But the United States army pursued him; more orderly men intervened to persuade him to retire to a hacienda in Durango. There, a decade later, when he was suspected by some to be contemplating another foray into banditry, social reform, and monetary policy, he was assassinated. (Galbraith, pp. 66-67)

3. Conclusions

Given that the Continentals and the assignats both suffered from hyperinflation towards the end, they have often been held up as a cautionary tale: governments are inherently profligate and cannot be trusted with money creation; only by strictly pegging paper money issuance to a cache of gold stashed away in a vault somewhere can hyperinflation be avoided.

As Galbraith notes, this is highly selective. Sure, if you look only for instances of paper money overissuance and inflation, you will find them. But this deliberately ignores the instances, often lasting for decades if not centuries, in which paper money functioned exactly as intended all across the globe: from ancient China, to colonial America, to modern times. It emphasizes the inflationary scare stories, but intentionally ignores the very real stimulus to commercial activity that paper money has provided, as opposed to the extreme constraints of a precious metal standard. It also totally ignores the extenuating circumstances behind famous hyperinflations, such as Germany’s repayment of war debt in the twentieth century, or the persistent economic warfare waged against Venezuela today.

So the attitude that “government simply can’t be trusted” is more of a political opinion than something based on historical facts.

…in the minds of some conservatives…there must have been a lingering sense of the singular service that paper money had, in the recent past, rendered to revolution. Not only was the American Revolution so financed. So also was the socially far more therapeutic eruption in France. If the French citizens had been required to act within the canons of conventional finance, they could not, any more than the Americans, have acted at all. (Galbraith, pp. 61-62)

The desire for a gold standard comes from a desire to anchor the value of money in something outside the control of governments. But, of course, pegging the value of a currency to a certain arbitrary amount of gold is itself a political choice. Nor does it guarantee price stability: the value of gold fluctuates. A gold standard is more a guarantee of the stability of the price of gold than of the stability of the value of money. And in almost every case of war or economic depression in modern history, the gold standard was immediately chucked into the trashbin.

The other thing worth noting is that the value of paper money is related to issues of both supply AND demand. Often, it’s not just that there is too much supply of currency. It’s that people refuse to accept the currency, which leads just as assuredly to a loss in value.

And the lack of acceptance is usually driven by a lack of faith in the issuing government. You can see why this might be the case for assignats and Continentals. Both were issued by revolutionary governments whose very stability and legitimacy were in question, particularly in France. If the government issuing the currency (which is an IOU, remember) may not be around a year from now, then how willing are you to accept that currency? James Madison pointed out that the value of any currency is mostly determined by faith in the credit of the government issuing it. That’s why he and the other Founding Fathers worked so hard to reestablish the credit of the United States following the Continental note debacle.
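One crude way to see both blades of the scissors at once is the textbook equation of exchange, MV = PQ. If people lose faith and refuse to hold the currency, each accepted note gets passed along faster, so velocity V rises and the price level P climbs even when the money supply M is fixed. A toy calculation (all numbers invented):

    # Equation of exchange, M*V = P*Q: the price level depends on the demand
    # to hold money (velocity), not just on its supply. Numbers invented.
    def price_level(money_supply, velocity, real_output):
        return money_supply * velocity / real_output

    M, Q = 1_000_000, 500_000  # money supply and real output, arbitrary units
    print(price_level(M, velocity=5, real_output=Q))   # trusted money:  10.0
    # Faith in the issuer collapses; nobody wants to hold the notes, so each
    # note changes hands twice as fast. Prices double with the same M.
    print(price_level(M, velocity=10, real_output=Q))  # distrusted:     20.0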

As Rebecca Spang, the author of Stuff and Money in the Time of the French Revolution, notes, many people in revolutionary France were vigorously opposed to the seizure of Church property. Thus, they would not accept the validity of notes backed by that property’s value. This lack of acceptance contributed just as much to the hyperinflation as did any profligacy on the part of the government:

Revolutionary France became a paradigm case for the quantity theory of money, the view that prices are directly and proportionately correlated with the amount of money in circulation, and for the deleterious consequences of letting the latter run out of control.

Yet Spang shows that such neat economic interpretations are inadequate. At times, for example, prices rose first and politicians boosted the money supply in response.

Spang reiterates that the first assignats were neither a revolutionary policy nor a form of paper money. But as her stylishly crafted narrative makes clear, this soon changed. Politicians made the cardinal error of thinking that the state could be stabilised by in effect destabilising its money.

Popular distrust of the “real” worth of assignats prompted a contagion of fraud, suspicion and uncertainty. How could one tell a fake assignat, when technology couldn’t replicate them precisely? How could they even be used, when there was no compulsion beyond patriotic duty for sellers to accept them as payment? Small wonder that so many artists made trompe l’oeil images out of them — what looked solid and real was anything but…

‘Stuff and Money in the Time of the French Revolution’, by Rebecca Spang (Financial Times)

Note that the situation of a stable government is totally different. Britain’s government was eminently stable compared to those of the United States and France at the time, hence its money retained most of its value even when convertibility was temporarily suspended. This also underlies the value of Switzerland’s currency today, since the Swiss have a legendarily stable, neutral government (and really not that much in the way of actual resources).

So those who argue that America’s “fiat” money is no good would somehow have to make the case that the United States government is more illegitimate or more unstable than the governments of other wealthy, industrialized nations. To my mind, this is tantamount to treason. Yet no one ever calls them out on this point. From that standpoint, the biggest threat to the money supply comes not from overissuance (hyperinflation is nowhere to be seen), but from undermining the faith in, and credit of, the United States government. That’s been done exclusively by Republicans in recent years by grandstanding over the debt ceiling—an artificial borrowing constraint imposed during the United States’ entry into World War One. Really, this should be considered an unpatriotic and treasonous act. It almost certainly would have been perceived as such by the Founding Fathers.

I always have the same response to libertarians who sneer at the “worthlessness” of government fiat money. My response is this: if you truly believe it is worthless, then I will gladly take it off your hands for you. Please hand over all the paper money you have in your wallet right now at this very moment, as well as all the paper money you may have lying around your house. If you want, you can even take out some “worthless” paper money from the nearest ATM and hand it over to me too; I’ll gladly take that off your hands as well. You can give me as much as you like.

To date, I have yet to have a libertarian take me up on that offer. I wonder why?

Next: The Civil War finally establishes a national paper currency for the U.S.

The Origin of Paper Money 6

1. France

France ended up conducting its own monetary experiment with paper money at around the same time as the American colonies, in the early 1700s. Unlike the American experiment, it was not successful. It was initiated by an immigrant Scotsman, a gambling addict fleeing a murder charge, by the name of John Law (Jean Lass to the French).

At this time—the early 1700s—France was having much the same conversation about the money supply as the Anglo-Saxon world. There, the problem was not so much a shortage of coins as an excess of sovereign debt, due to the wild spending sprees of France’s rulers on foreign wars and luxurious lifestyles.

Despite France being probably the wealthiest and most powerful nation in Western Europe, its debts (really, the King’s debts) exceeded its assets by quite a bit, at least on paper. The country struggled to raise enough funds via its antiquated and inefficient feudal tax system to pay the interest on its bonds; France’s debt traded in secondary markets as what we might today call junk bonds (i.e. with low odds of repayment).

Louis XIV, having lived too long, had died the year before Law’s arrival. The financial condition of the kingdom was appalling: expenditures were twice receipts, the treasury was chronically empty, the farmers-general of the taxes and their horde of subordinate maltôtiers were competent principally in the service of their own rapacity.

The Duc de Saint-Simon, though not always the most reliable counsel, had recently suggested that the straightforward solution was to declare national bankruptcy – repudiate all debt and start again. Philippe, Duc d’Orleans, the Regent for the seven-year-old Louis XV, was largely incapable of thought or action.

Then came Law. Some years earlier, it is said, he had met Philippe in a gambling den. The latter ‘had been impressed with the Scotsman’s financial genius.’ Under a royal edict of 2 May 1716, Law, with his brother, was given the right to establish a bank with capital of 6 million livres, about 250,000 English pounds…
(Galbraith, pp. 21-22)

…The creation of the bank proceeded in clear imitation of the already successful Bank of England. Under special license from the French monarch, it was to be a private bank that would help raise and manage money for the public debt. In keeping with his theories on the benefits of paper money, Law immediately began issuing paper notes representing the supposedly guaranteed holdings of the bank in gold coins.

Law’s…bank took in gold and silver from the public and lent it back out in the form of paper money. The bank also took deposits in the form of government debt, cleverly allowing people to claim the full value of debts that were trading at heavy discounts: if you had a piece of paper saying the king owed you a thousand livres, you could get only, say, four hundred livres in the open market for it, but Law’s bank would credit you with the full thousand livres in paper money. This meant that the bank’s paper assets far outstripped the actual gold it had in store, making it a precursor of the “fractional-reserve banking” that’s normal today. Law’s bank had, by one estimate, about four times as much paper money in circulation as its gold and silver reserves…

The new paper money had an attractive feature: it was guaranteed to trade for a specific weight of silver, and, unlike coins, could not be melted down or devalued. Before long, the banknotes were trading at more than their value in silver, and Law was made Controller General of Finances, in charge of the entire French economy.

The Invention of Money (The New Yorker)
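It’s worth pausing on the arithmetic of that four-to-one estimate. A bank in this position is fine so long as only a trickle of noteholders ask for their coin back; a rush to redeem breaks it instantly. A minimal sketch, with invented figures matching the ratio quoted above:

    # Fractional-reserve fragility, Banque Royale style. Invented figures
    # matching the "four times as much paper as reserves" estimate above.
    specie_reserves = 25_000_000     # livres of gold and silver in the vault
    notes_outstanding = 100_000_000  # paper notes promising coin on demand

    coverage = specie_reserves / notes_outstanding
    print(f"Only {coverage:.0%} of the notes can actually be redeemed.")
    # Only 25% of the notes can actually be redeemed. If more than a quarter
    # of holders demand coin at once -- a run -- the promise collapses.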

It’s also worth noting that banknotes were denominated in the unit of account, unlike coins, which typically were not. Coins’ value usually fluctuated against the unit of account (what prices were expressed in), sometimes by the day. What a silver sovereign or gold Louis d’Or was worth on one day might be different than the next, especially since the monarchs liked to devalue the currency in order to decrease the amount of their debts. However, if you brought, say, 10 livres, 18 sous worth of coins to Law’s bank, the paper banknote would be written up for the amount the coins were worth at that time: 10 livres, 18 sous.

By buying back the government’s debt, Law was able to “retire” it. Thus, the money circulating was ultimately backed by government debt (bonds), just like our money today. Law’s promise to redeem the notes for specie gave users the confidence to use them. Later on, the government would decree the notes of the Banque Générale the “official” money for the payment of taxes and settlement of all debts, legitimizing their value by fiat. Law later attempted to sever the link to gold and silver by demonetizing the latter. He was not successful; paper money was far too novel at the time for people to trust its value in the absence of anything tangible backing it.

Little of what transpired would seem unusual today, but it was pretty radical for the early 1700s. Had Law stopped at this point, it’s likely that all of this would have been successful, as Galbraith points out:

In these first months, there can be no doubt, John Law had done a useful thing. The financial position of the government was eased. The bank notes loaned to the government and paid out by it for its needs, as well as those loaned to private entrepreneurs, raised prices….[and] the rising prices…brought a substantial business revival.

Law opened branches of his bank in Lyons, La Rochelle, Tours, Amiens and Orleans; presently, in the approximate modern language, he went public. His bank became a publicly chartered company, the Banque Royale.

Had Law stopped at this point, he would be remembered for a modest contribution to the history of banking. The capital in hard cash subscribed by the stockholders would have sufficed to satisfy any holders of notes who sought to have them redeemed. Redemption being assured, not many would have sought it.

It is possible that no man, having made such a promising start, could have stopped…
(Galbraith, pp. 22-23)

Trading government debt for paper money helped lower the government’s debts, but on paper, France’s liabilities still exceeded its assets. However, it had one asset that had not yet been monetized—millions of acres of land on the North American continent. So Law set out to monetize that land by turning it into shares in a joint-stock company called the Mississippi Company (Compagnie d’Occident). The Mississippi Company had a monopoly on all trading with the Americas. Buying a share in the company meant a cut of the profits (i.e. equity) from trading with North America.

The first loans and the resulting note issue having been visibly beneficial – and also a source of much personal relief – the Regent proposed an additional issue. If something does good, more must do better. Law acquiesced.

Sensing the need, he also devised a way of replenishing the reserves with which the Banque Royale backed up its growing volume of notes. Here he showed that he had not forgotten his original idea of a land bank.

His idea was to create the Mississippi Company to exploit and bring to France the very large gold deposits which Louisiana was thought to have as subsoil. To the metal so obtained were also to be added the gains of trade. Early in 1719, the Mississippi Company (Compagnie d’Occident), later the Company of the Indies, was given exclusive trading privileges in India, China and the South Seas. Soon thereafter, as further sources of revenue, it received the tobacco monopoly, the right to coin money and the tax farm. (Galbraith, p. 23)

Law—or the Duc d’Arkansas as he was now known—talked up the corporation so well that the value of the shares skyrocketed—probably the world’s very first stock bubble (but hardly the last). Gambling fever was widespread and contagious, as the desire to get rich by doing nothing is a human universal. The term “millionaire” was coined. Law took advantage of the inflated share price to buy back more of the government’s debt. And the money to buy the shares at the inflated prices was printed by the bank itself. Knowing that there was far more paper than gold and silver to back it in the kingdom, Law then tried to break the link between paper money and specie by demonetizing gold and silver; at one point making it illegal to even hold precious metals.

He was unsuccessful. Paper money was still too new, and people were unwilling to trust it without the backing of precious metal, causing a loss of faith in the currency. Later suspensions of convertibility were done only after generations of paper money use. Law’s entire scheme (from origin to collapse) took place over the course of less than a year.

[Law] funded the [Mississippi] company the same way he had funded the bank, with deposits from the public swapped for shares. He then used the value of those shares, which rocketed from five hundred livres to ten thousand livres, to buy up the debts of the French King. The French economy, based on all those rents and annuities and wages, was swept away and replaced by what Law called his “new System of Finance.”

The use of gold and silver was banned. Paper money was now “fiat” currency, underpinned by the authority of the bank and nothing else. At its peak, the company was priced at twice the entire productive capacity of France…that is the highest valuation any company has ever achieved anywhere in the world.

Galbraith and Weatherford summarize the shell game that Law’s “system” ended up becoming:

To simplify slightly, Law was lending notes of the Banque Royale to the government (or to private borrowers) which then passed them on to people in payment of government debts or expenses. These notes were then used by the recipients to buy stock in the Mississippi Company, the proceeds from which went to the government to pay expenses and pay off creditors who then used the notes to buy more stock, the proceeds from which were used to meet more government expenditures and pay off more public creditors. And so it continued, each cycle being larger than the one before. (Galbraith, p. 24)

The Banque Royale printed paper money, which investors could borrow in order to buy stock in the Mississippi company; the company then used the new notes to pay out its bogus profits. Together the Mississippi Company and the Banque Royale were producing paper profits on each other’s accounts. The bank had soon issued twice as much paper money as there was specie in the whole country; obviously it could no longer guarantee that each paper note would be redeemed in gold. (Weatherford, p. 131)
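The structure of this shell game is easy to see in a toy simulation. Here is a minimal sketch in Python; all quantities and the growth factor are invented for illustration, and only the circular structure comes from the passages above:

```python
# Sketch of the Banque Royale / Mississippi Company loop described
# above: notes are printed and lent, spent into circulation, used to
# buy Company stock, and the share proceeds fund more government
# spending -- "each cycle being larger than the one before."
# All numbers are invented; only the structure matters.

total_paper = 0.0
total_stock_sold = 0.0
issue = 50.0     # initial note issue (arbitrary units)
growth = 1.5     # assumed expansion factor per cycle

for cycle in range(1, 7):
    total_paper += issue           # new notes printed and lent out
    total_stock_sold += issue      # recipients buy Mississippi stock
    print(f"cycle {cycle}: issued {issue:8.1f}, "
          f"paper outstanding {total_paper:8.1f}, "
          f"stock sold {total_stock_sold:8.1f}")
    issue *= growth                # the next round must be bigger

# Nothing in the loop ever adds specie or real earnings; the pyramid
# stands only as long as nobody asks to redeem notes for gold.
```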

Such a scheme couldn’t last, of course. Essentially the entire French economy—its central bank, its money supply, its tax system, and the monopoly on land in North America—was in the hands of one single, giant conglomerate run by one man. That meant that when one part of the system failed, all the rest went down with it, like mountain climbers roped together.

Because the central bank owned the Mississippi company, it had an incentive to loan out excess money to drive the share price up—in other words, to inflate a stock bubble based on credit. This is always a bad idea. Finally, Law’s exaggeration of the returns on investments in the Mississippi Company inflated expectations far beyond what was realistic.

The popping of the Mississippi stock bubble, followed by a run on the bank, was enough to bring the whole thing crashing down.

People started to wonder whether these suddenly lucrative investments were worth what they were supposed to be worth; then they started to worry, then to panic, then to demand their money back, then to riot when they couldn’t get it.

Gold and silver were reinstated as money, the company was dissolved, and Law was fired, after a hundred and forty-five days in office. In 1720, he fled the country, ruined. He moved from Brussels to Copenhagen to Venice to London and back to Venice, where he died, broke, in 1729.

The Invention of Money (The New Yorker)

As Law must have known, if you gamble big, sometimes you lose big.

Some of the death of the Bank was murder, not suicide. As part of his System, one of Law’s initiatives was to simplify and modernize the inefficient and antiquated French tax system. Taxes were collected by tax farmers (much as in ancient Rome), and Law threatened to upset their apple cart. He also attempted to end the sale of government offices to the highest bidder. This made him a lot of enemies among the moneyed classes, who thrived on graft and corruption. Such influential people (notably the financiers the Paris brothers) were instrumental in the run on the bank and the subsequent loss of confidence in the money system:

[Law] set about streamlining a tax system riddled with corruption and unnecessary complexity. As one English visitor to France in the late seventeenth century observed: “The people being generally so oppressed with taxes, which increase every day, their estates are worth very little more than what they pay to the King; so that they are, as it were, tenants to the Crown, and at such a rack rent that they find great difficulty to get their own bread.” The mass of offices sold to raise money had caused one of Louis XIV’s ministers to comment, “When it pleases Your Majesty to create an office, God creates a fool to purchase it.” There were officials for inspecting the measuring of cloth and candles; hay trussers; examiners of meat, fish and fowl. There was even an inspector of pigs’ tongues.

This did nothing for efficiency, Law deemed, and served only to make necessities more expensive and to encourage the holders of the offices “to live in idleness and deprive the state of the service they might have done it in some useful profession, had they been obliged to work.” In place of the hundreds of old levies he swept away (over forty in one edict alone), Law introduced a new national taxation system called the denier royal, based on income. The move caused an outcry among the holders of offices, many of whom were wealthy financiers and members of the Parliament, but delight among the public. “The people went dancing and jumping about the streets,” wrote Defoe. “They now pay not one farthing tax for wood, coal, hay, oats, oil, wine, beer, bread, cards, soap, cattle, fish.” (Janet Gleeson, Millionaire, pp. 155-156)

Michel Aglietta, in his magisterial work on money, notes that Law…

…wanted to introduce the logic of capitalism in France, based on providing credit through money creation. Money creation had to be based on expected future wealth, and no longer on the past wealth accumulated in precious metals. (Aglietta, p. 206, emphasis in original)

The danger is that if this wealth fails to materialize, or if people lose the belief that it will, confidence in the system is lost and failure soon follows.

Although John Law has come down in history as a grifter, and his ideas as fundamentally unsound, many of those ideas eventually became core tenets of modern global finance:

The great irony of Law’s life is that his ideas were, from the modern perspective, largely correct. The ships that went abroad on behalf of his great company began to turn a profit. The auditor who went through the company’s books concluded that it was entirely solvent—which isn’t surprising, when you consider that the lands it owned in America now produce trillions of dollars in economic value.

Today, we live in a version of John Law’s system. Every state in the developed world has a central bank that issues paper money, manipulates the supply of credit in the interest of commerce, uses fractional-reserve banking, and features joint-stock companies that pay dividends. All of these were brought to France, pretty much simultaneously, by John Law.

The Invention of Money (The New Yorker)

Law’s efforts left a lingering suspicion of paper money in France. Unfortunately, the revenue problem was not definitively solved. Going back on a specie standard delivered a huge blow to commerce. While England’s paper money system flourished, France stagnated economically. Eventually, the revenue situation of the government became so dire that the King had no choice but to call an Estates General—the extremely rare parliamentary session that kicked off the French Revolution—in 1789.

Once the Mississippi bubble burst, a lot of the capital in France needed a new outlet. Much of that capital fled across the Channel to England, which at the time was inflating a stock bubble of its own:

France’s ruin was England’s gain. Numerous bruised Mississippi shareholders chose to reinvest in English South Sea shares.
The previous month, with a weather eye to developments in France, the South Sea Company managed to beat its rival the Bank of England and secure a second lucrative deal with the government whereby it took over a further £48 million of national debt and launched a new issue of shares. A multitude of English and foreign investors were now descending on London as they had flocked less than a year earlier to Paris “with as much as they can carry and subscribing for or buying shares.”

In Exchange Alley–London’s rue Quincampoix–the sudden surge of new money also bubbled a plethora of alternative companies launched to capitalize on the new fashion for financial fluttering… (Gleeson, p. 200)

2. England

Britain chose a different tack – sovereign debt would be monetized and circulate as money. It too utilized the joint-stock company model that had been invented in the previous centuries to enable the Europeans to raise the funds to exploit and colonize the rest of the world. A bank was founded as a chartered company to take in money through subscribed shares and loan that money out to the King. That debt—and not land—would serve as the security for the notes issued by the bank. The notes would then circulate as money, albeit alongside precious metal coins and several other forms of payment. As with the original invention of sovereign debt in northern Italy, it was used to raise the necessary funds for war:

The modern system for dealing with [the] problem [of funding wars] arose in England during the reign of King William, the Protestant Dutch royal who had been imported to the throne of England in 1689, to replace the unacceptably Catholic King James II.

William was a competent ruler, but he had serious baggage—a long-running dispute with King Louis XIV of France. Before long, England and France were involved in a new phase of this dispute, which now seems part of a centuries-long conflict between the two countries, but at the time was variously called the Nine-Years’ War or King William’s War. This war presented the usual problem: how could the nations afford it?

King William’s administration came up with a novel answer: borrow a huge sum of money, and use taxes to pay back the interest over time. In 1694, the English government borrowed 1.2 million pounds at a rate of eight per cent, paid for by taxes on ships’ cargoes, beer, and spirits. In return, the lenders were allowed to incorporate themselves as a new company, the Bank of England. The bank had the right to take in deposits of gold from the public and—a second big innovation—to print “Bank notes” as receipts for the deposits. These new deposits were then lent to the King. The banknotes, being guaranteed by the deposits, were as good as gold money, and rapidly became a generally accepted new currency.

The Invention of Money (The New Yorker)

From this point forward, money would be circulating government debt. Plus, its value would be based on future revenues, as Aglietta noted above, and not just on the amount of gold and silver coins floating around.

The originality of the Bank of England was that it was not a deposit bank. Unlike for the Bank of Amsterdam, the coverage for the notes issued was very low (3 percent in the beginning). These notes, the counterparty to its loans to the state, replaced bills of exchange and became national and international means of payment for the bank’s customers.

They were not legal tender until 1833. But the securities issued by the bank, bringing interest on the public debt, became legal tender for all payments to the government from 1697 onwards. (Aglietta, pp. 136-137)

Why did the King of England have to borrow at all? Well, for a couple of reasons. The power to raise taxes had been taken away from the King and given to Parliament as a consequence of the English Revolution. That revolutionary era also witnessed the inauguration of goldsmith banking (such as that undertaken by John Law’s own family of goldsmiths). These goldsmith receipts were the forerunners of the banknote:

The English Civil War…broke out because parliament disputed the king’s right to levy taxes without its consent. The use of goldsmith’s safes as secure places for people’s jewels, bullion and coins increased after the seizure of the mint by Charles I in 1640 and increased again with the outbreak of the Civil War. Consequently some goldsmiths became bankers and development of this aspect of their business continued after the Civil War was over.

Within a few years of the victory by the parliamentary forces, written instructions to goldsmiths to pay money to another customer had developed into the cheque (or check in American spelling). Goldsmiths’ receipts were used not only for withdrawing deposits but also as evidence of ability to pay and by about 1660 these had developed into the banknote.

Warfare and Financial History (Glyn Davies, History of Money online)

By this time, control over money had passed into the hands of a rising mercantile class, who—thanks to the staggering wealth produced by globalized trade—possessed more wealth than mere princes and kings, but lacked the ability to write laws or to print money, powers which they strongly coveted. It was these merchants and “moneyed men” (often members of the Whig party in Parliament) who backed the Dutch stadtholder William of Orange’s claim to the English throne in 1688.

The banknotes began to circulate widely, displacing coins and bills of exchange. And it didn’t stop there: more money was quickly needed, and the Bank acquired more influence. Part of this was due to England being a naval—rather than an army—power. Warships require huge expenditures of capital to build. They also require a vast panoply of resources, such as wood, nails, iron, cloth, stocked provisions, and so forth; whereas land-based armies just require paying soldiers and provisions (which can be commandeered). Thus, the financial means to mobilize these resources were much more likely to develop in naval powers such as Holland and England than in continental powers like France, Austria and Spain.

This important post from the WEA Pedagogy blog uses excerpts from Ellen Brown’s Web of Debt to lay out the creation of the Bank of England and, consequently, of central banking in general (and it is well worth reading in full):

William was soon at war with Louis XIV of France. To finance his war, he borrowed 1.2 million pounds in gold from a group of moneylenders, whose names were to be kept secret. The money was raised by a novel device that is still used by governments today: the lenders would issue a permanent loan on which interest would be paid but the principal portion of the loan would not be repaid.

The loan also came with other strings attached. They included:

– The lenders were to be granted a charter to establish a Bank of England, which would issue banknotes that would circulate as the national paper currency.

– The Bank would create banknotes out of nothing, with only a fraction of them backed by coin. Banknotes created and lent to the government would be backed mainly by government I.O.U.s, which would serve as the “reserves” for creating additional loans to private parties.

– Interest of 8 percent would be paid by the government on its loans, marking the birth of the national debt.

– The lenders would be allowed to secure payment on the national debt by direct taxation of the people. Taxes were immediately imposed on a whole range of goods to pay the interest owed to the Bank.

The Bank of England has been called “the Mother of Central Banks.” It was chartered in 1694 to William Paterson, a Scotsman who had previously lived in Amsterdam. A circular distributed to attract subscribers to the Bank’s initial stock offering said, “The Bank hath benefit of interest on all moneys which it, the Bank, creates out of nothing.” The negotiation of additional loans caused England’s national debt to go from 1.2 million pounds in 1694 to 16 million pounds in 1698. By 1815, the debt was up to 885 million pounds, largely due to the compounding of interest. The lenders not only reaped huge profits, but the indebtedness gave them substantial political leverage.

The Bank’s charter gave the force of law to the “fractional reserve” banking scheme that put control of the country’s money in a privately owned company. The Bank of England had the legal right to create paper money out of nothing and lend it to the government at interest. It did this by trading its own paper notes for paper bonds representing the government’s promise to pay principal and interest back to the Bank — the same device used by the U.S. Federal Reserve and other central banks today.

Note that the interest on the loan is paid, but never the loan itself. That meant that tax revenues were increasingly funneled to a small creditor class to whom the government was indebted. Today, we call such people bondholders, and they exercise their leverage over governments through the bond markets. For all intents and purposes, this system ended government sovereignty and tied the hands of even elected governments, limiting their ability to spend tax money on the domestic needs of their own people. Control over the state’s money was lost forever.
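Two quick calculations make the quoted figures concrete. This is a minimal sketch in Python using only the numbers given above (the 1.2 million pound loan at 8 percent, and the debt totals for 1694 and 1815); the comparison at the end is my own arithmetic, not a figure from the source:

```python
# The 1694 loan: the principal is never repaid; interest is paid forever.
principal = 1_200_000   # pounds borrowed in 1694
rate = 0.08             # 8 percent
print(f"Perpetual interest bill: {principal * rate:,.0f} pounds per year")

# Average compound growth rate implied by the quoted debt figures:
# 1.2 million pounds in 1694 -> 885 million pounds by 1815.
debt_1694, debt_1815 = 1_200_000, 885_000_000
years = 1815 - 1694
implied_rate = (debt_1815 / debt_1694) ** (1 / years) - 1
print(f"Implied average growth: {implied_rate:.1%} per year over {years} years")
# ~5.6% per year. For comparison, pure 8% compounding with nothing ever
# paid would have grown the same principal to roughly 13 billion pounds.
```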

An interesting couple of notes: William Paterson was, like John Law, a Scotsman—giving credence to the claim that it was the Scots who “invented Capitalism” (Adam Smith and James Watt were also Scots). It also suggests (to me, anyway) that the modern financial system was started by instinctive hustlers and gamblers. We’ve already referred to John Law’s expertise at the gambling tables of Europe and his ability to inspire confidence in his schemes. Paterson, upon returning to Scotland, began raising funds via stock for an ambitious scheme to develop a colony in Central America. The venture ended up being one of the worst disasters in history. Not only that, but the Darien scheme collapsed so badly that Scotland’s entire financial health was devastated, which is considered to be a factor in Scotland signing the Acts of Union, politically joining with England to the south.

For an overview of the Darien scheme, see this: Scotland’s lessons from Darien debacle (BBC)

The WEA Pedagogy blog then adds some additional details:

Some more detail of interest is that the creation of Bank of England was tremendously beneficial for England. The King, no longer constrained, was able to build up his navy to counter the French. The massive (deficit) spending required for this purpose led to substantial progress in industrialization.

Quoting Wikipedia on this: “As a side effect, the huge industrial effort needed, including establishing ironworks to make more nails and advances in agriculture feeding the quadrupled strength of the navy, started to transform the economy. This helped the new Kingdom of Great Britain – England and Scotland were formally united in 1707 – to become powerful. The power of the navy made Britain the dominant world power in the late 18th and early 19th centuries”

The post then summarizes the history of the creation of central banking:

…It is in this spirit that we offer a “finance drives history” view of the creation of the first Central Bank. The history above can be encapsulated as follows:

1. Queen Elizabeth asserted and acquired the sovereign right to issue money.
2. The moneylenders (the mysterious 0.1% of that time) financed and funded a revolution against the king, acquiring many privileges in the process.
3. Then they financed and funded the restoration of the aristocracy, acquiring even more privileges in the process.
4. Finally, when the King was in desperate straits to raise money, they offered to lend him money at 8% interest, in return for creating the Bank of England, acquiring permanently the privilege of printing money on behalf of the king.

The process by which money was created by the Bank of England is extremely interesting. They acquired the debt of the King. This debt was used as collateral/backing for the money they created. The notes they issued were legal tender in England. Whenever necessary, they were prepared to exchange them for gold, at the prescribed rates. However, when the confidence of the public is high, the need for actual gold as backing is substantially reduced.

Origins of Central Banking (WEA Pedagogy Blog)

As I noted above, the importance of the Navy in the subsequent industrialization of England is often overlooked. A few scholars have argued that it was Britain’s emphasis on naval power that was a factor in England (and not somewhere else) becoming the epicenter of the Industrial Revolution. Many of its key inventions were sponsored by the government in order to fight and navigate at sea more effectively (from accurate clocks and charts to canned food). Even early mass production was prompted by the needs of the British Navy: pulley blocks for rigging were among the first items to be mass-produced by machinery.

Just like in other countries, the needs of war caused the Bank to issue more and more notes, greatly increasing the national debt. However, the vast profits of industrialization and colonialism were enough to support it. When convertibility was temporarily suspended out of necessity during the Napoleonic Wars, paper money continued to carry the trust of the public, unlike in France. Galbraith sums up the subsequent history of the Bank of England:

In the fifteen years following the granting of the original charter the government continued in need, and more capital was subscribed by the Bank. In return, it was accorded a monopoly of joint-stock, i.e., corporate, banking under the Crown, one that lasted for nearly a century. In the beginning, the Bank saw itself merely as another, though privileged, banker.

Similarly engaged in a less privileged way were the goldsmiths, who by then had emerged as receivers of deposits and sources of loans and whose operations depended rather more on the strength of their strong boxes than on the rectitude of their transactions. They strongly opposed the renewal of the Bank’s charter. Their objections were overcome, and the charter was renewed.

Soon, however, a new rival appeared to challenge the Bank’s position as banker for the government. This was the South Sea Company. In 1720, after some years of more routine existence, it came forward with a proposal for taking over the government debt in return for various concessions, including, it was hoped, trading privileges to the Spanish colonies, which, though it was little noticed at the time, required a highly improbable treaty with Spain.

The Bank of England bid strenuously against the South Sea Company for the public debt but was completely outdone by the latter’s generosity, as well as by the facilitating bribery by the South Sea Company of Members of Parliament and the government. The rivalry between the two companies did not keep the Bank from being a generous source of loans for the South Sea venture. All in all, it was a narrow escape.

For the enthusiasm following the success of the South Sea Company was extreme. In the same year that Law’s operations were coming to their climax across the Channel, a wild speculation developed in South Sea stock, along with that in numerous other company promotions, including one for a wheel for perpetual motion, one for ‘repairing and rebuilding parsonage and vicarage houses’ and the immortal company ‘for carrying on an undertaking of great advantage, but nobody to know what it is’. All eventually passed into nothing or something very near.
In consequence of its largely accidental escape, the reputation of the Bank for prudence was greatly enhanced.

As Frenchmen were left suspicious of banks, Englishmen were left suspicious of joint-stock companies. The Bubble Acts (named for the South Sea bubble) were enacted and for a century or more kept such enterprises under the closest interdict.

From 1720 to 1780, the Bank of England gradually emerged as the guardian of the money supply as well as of the financial concerns of the government of England. Bank of England notes were readily and promptly redeemed in hard coin and, in consequence, were not presented for redemption. The notes of its smaller competitors inspired no such confidence and were regularly cashed in or, on occasion, orphaned.
By around 1770, the Bank of England had become nearly the sole source of paper money in London, although the note issues of country banks lasted well into the following century. The private banks became, instead, places of deposit. When they made loans, it was deposits, not note circulation, that expanded, and, as a convenient detail, cheques now came into use. (Galbraith, pp. 32-34)

By a complete accident, Britain was able to escape France’s fate. When the South Sea bubble popped, the Bank of England was able to reliably take up the slack and manage the government’s debt—an option that France did not have, since the central bank and the Company were all part of the same organization, and that organization had a monopoly over loans to the government, tax collection, and money creation.

Next time: An Instrument of Revolution.

The Origin of Paper Money 5

As noted last time, the issuance of printed money by Pennsylvania was highly successful. It increased trade and greatly expanded the economy.

One person who noticed this was a young printer by the name of Benjamin Franklin. At the age of only 23, he wrote a treatise strongly advocating the benefits of printing paper money to increase the domestic money supply.

Franklin arrived in Philadelphia the year paper money was first issued by Pennsylvania (1723), and he soon became a keen observer of and commentator on colonial money…Franklin noted that after the legislature issued this paper money, internal trade, employment, new construction, and the number of inhabitants in the province all increased. This feet-on-the-ground observation, this scientific empiricism in Franklin’s nature, would have a profound effect on Franklin’s views on money throughout his life. He will repeat this youthful observation many times in his future writings on money.

Benjamin Franklin and the Birth of a Paper Money Economy

Franklin had noted the effects that the chronic shortage of precious metal coins had on the local economy. Something needed to be done, he thought. Franklin, of course, being a printer by trade, felt that his printing presses might be the solution to this problem.

Franklin’s proposal–and this was key–was that paper money could not be backed by silver and gold, because the lack of silver and gold was what the paper money was designed to rectify in the first place!

Franklin also noted a point that critics of the gold standard have made ever since: the value of gold and silver is not stable, but fluctuates over time with supply and demand, just like everything else! Backing one’s currency by specie was no guarantee of stable prices or a stable money supply. As was seen in Europe, a sudden influx could send prices soaring, and a dearth would send prices crashing. As we’ll see, this was a major problem with precious metal standards throughout the nineteenth century—a point conspicuously ignored by goldbugs. Instead, he proposed a land bank, which, as we saw earlier, was a very popular idea at this time. Even though the colonies didn’t have sources of precious metals—and couldn’t mint them even if they did—they did have an abundant supply of real estate, far more than Europe, in fact. Land could be mortgaged, and the mortgages would act as backing for the new government-issued currency.

Economist (and Harry Potter character) Farley Grubb has written a definitive account of Franklin’s proposal:

Franklin begins his pamphlet by noting that a lack of money to transact trade within the province carries a heavy cost because the alternative to paper money is not gold and silver coins, which through trade have all been shipped off to England, but barter. Barter, in turn, increases the cost of local exchange and so lowers wages, employment, and immigration. Money scarcity also causes high local interest rates, which reduces investment and slows development. Paper money will solve these problems.

But what gives paper money its value? Here Franklin is clear throughout his career: It is not legal tender laws or fixed exchange rates between paper money and gold and silver coins but the quantity of paper money relative to the volume of internal trade within the colony that governs the value of paper money. An excess of paper money relative to the volume of internal trade causes it to lose value (depreciate). The early paper monies of New England and South Carolina had depreciated because the quantities were not properly controlled.

So will the quantity of paper money in Pennsylvania be properly controlled relative to the demands of internal trade within the province?

First, Franklin points out that gold and silver are of no permanent value and so paper monies linked to or backed by gold and silver, as with bank paper money in Europe, are of no permanent value. Everyone knew that over the previous 100 years the labor value of gold and silver had fallen because new discoveries had expanded supplies faster than demand. The spot value of gold and silver could fluctuate just like that of any other commodity and could be acutely affected by unexpected trade disruptions. Franklin observes in 1729 that “we [Pennsylvanians] have already parted with our silver and gold” in trade with England, and the difference between the value of paper money and that of silver is due to “the scarcity of the latter.”

Second, Franklin notes that land is a more certain and steady asset with which to back paper money. For a given colony, its supply will not fluctuate with trade as much as gold and silver do, nor will its supply be subject to long-run expansion as New World gold and silver had been. Finally, and most important, land cannot be exported from the province as gold and silver can. He then points out that Pennsylvania’s paper money will be backed by land; that is, it will be issued by the legislature through a loan office, and subjects will pledge their lands as collateral for loans of paper money.

Benjamin Franklin and the Birth of a Paper Money Economy

Franklin argued that the amount of money circulating would be self-correcting. If too little was issued, he said, falling prices would motivate people to mortgage their land to get their hands on more bills. If too much money was circulating, its value would fall, and mortgagors would use the cheaper notes to buy back their land, thus retiring the notes from circulation and alleviating the oversupply.

Finally, Franklin argues that “coined land” or a properly run land bank will automatically stabilize the quantity of paper money issued — never too much and never too little to carry on the province’s internal trade. If there is too little paper money, the barter cost of trade will be high, and people will borrow more money on their landed security to reap the gains of the lowered costs that result when money is used to make transactions. A properly run land bank will never loan more paper money than the landed security available to back it, and so the value of paper money, through this limit on its quantity, will never fall below that of land.

If, by chance, too much paper money were issued relative to what was necessary to carry on internal trade such that the paper money started to lose its value, people would snap up this depreciated paper money to pay off their mortgaged lands in order to clear away the mortgage lender’s legal claims to the land. So people could potentially sell the land to capture its real value. This process of paying paper money back into the government would reduce the quantity of paper money in circulation and so return paper money’s value to its former level.

Automatic stabilization or a natural equilibrium of the amount of paper money within the province results from decentralized market competition within this monetary institutional setting. Fluctuations in the demand for money for internal trade are accommodated by a flexible internal money supply directly tuned to that demand. This in turn controls and stabilizes the value of money and the price level within the province.

Benjamin Franklin and the Birth of a Paper Money Economy
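What Grubb is describing is a negative-feedback loop, and it is simple enough to simulate. Here is a toy sketch in Python; the demand level, starting supply, and response strength are all invented parameters, not figures from Franklin or Grubb:

```python
# Toy model of Franklin's "automatic stabilization": the quantity of
# loan-office paper money adjusts toward what internal trade requires.
# All parameters are invented for illustration.

trade_demand = 100.0   # paper money the province's trade can absorb
money_supply = 60.0    # start from a shortage
response = 0.5         # assumed strength of borrowers' reaction

for year in range(1, 11):
    gap = trade_demand - money_supply
    # gap > 0: money scarce, barter costly -> people mortgage land at the
    #          loan office, borrowing new notes into circulation.
    # gap < 0: money abundant, notes depreciate -> mortgagors repay loans
    #          with cheap notes, retiring them from circulation.
    money_supply += response * gap
    print(f"year {year:2d}: money supply = {money_supply:6.2f}")

# The supply converges on trade_demand from either direction --
# never too much and never too little, as Franklin argued.
```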

Given that the United States was the major pioneer in the Western world of successful paper fiat currency, it is ironic that it has become one of the centers of resistance to the very idea today. This is in large part due to the bottomless funding by billionaire libertarian cranks promoting shaky economic ideas in the United States, such as Austrian Economics, whereas in the rest of the world common sense prevails. Wild, paranoid conspiracy theories about money (and just about everything else) also circulate far more widely in the United States than in the rest of the developed world, which has far better educational systems.

Returning to the gold standard is—bizarrely—a cause appropriated by people LARPing the American Revolution today in tricorner hats, proclaiming themselves the only true “patriots.” Yet, as we’ve seen, the young United States was the world’s leading innovator in issuing paper money not backed by gold–i.e. fiat currency. And this led to its prosperity. Founding Father Benjamin Franklin was a major advocate of paper money not backed by gold. This is rather inconvenient for libertarians (as is most of actual history).

The young have always learned that Benjamin Franklin was the prophet of thrift and the exponent of scientific experiment. They have but rarely been told that he was the advocate of the use of the printing press for anything except the diffusion of knowledge. (Galbraith, p. 55)

That’s right, Ben Franklin was an advocate of “printing money.” Something to remember the next time a Libertarian glibly sneers at the concept. Later advocates of “hard money”—i.e. goldbugs like Andrew Jackson—would send the U.S. economy crashing to its knees in the 1830s by insisting on a return to specie.

Here’s Galbraith describing the theory behind paper money:

There is very little in economics that invokes the supernatural. But by one phenomenon many have been tempted. In looking at a rectangular piece of paper, on frequent occasion of indifferent quality, featuring a national hero or monument or carrying a classical design with overtones of Peter Paul Rubens, Jacques Louis David or a particularly well-stocked vegetable market and printed in green or brown ink, they have been assailed by the question: Why is anything intrinsically so valueless so obviously desirable? What, in contrast to a similar mass of fibres clipped from yesterday’s newspaper, gives it the power to command goods, enlist service, induce cupidity, promote avarice, invite to crime? Surely some magic is involved; certainly some metaphysical or extraterrestrial explanation of its value is required. The priestly reputation and tendency of people who make a profession of knowing about money have been noted. Partly it is because such people are thought to know why valueless paper has value.

The explanation is wholly secular; nor is magic involved.

Writers on money have regularly distinguished between three types of currency:

(1) that which owes its value, as do gold and silver, to an inherent desirability derived from well-established service to pride of possession, prestige of ownership, personal adornment, dinner service or dentistry;

(2) that which can be readily exchanged for something of such inherent desirability or which carries the promise, like the early Massachusetts Bay notes, of eventual exchange; and

(3) currency which is intrinsically worthless, carries no promise that it will be redeemed in anything useful or desirable and which is sustained, at most, by the fiat of the state that it be accepted.

In fact, all three versions are variations on a single theme.

John Stuart Mill…made the value of paper money dependent on its supply in relation to the supply of things available for purchase.
Were the money gold or silver, there was little chance, the plethora of San Luis Potosí or Sutter’s Mill apart, for the amount to increase unduly. This inherent limit on supply was the security that, as money, it would be limited in amount and so retain its value.

And the same assurance of limited supply held for paper money that was fully convertible into gold and silver. As it held for paper that could not be converted into anything for so long as the supply of such paper was limited. It was the fact of scarcity, not the fact of intrinsic worthlessness, that was important. The problem of paper was that, in the absence of convertibility, there was nothing to restrict its supply. Thus it was vulnerable to the unlimited increase that would diminish or destroy its value.

The worthlessness of paper is a detail. Rock quarried at random from the earth’s surface and divided into units of a pound and upward would not serve very happily as currency. So great would be the potential supply that the weight of rock for even a minor transaction would be a burden. But rock quarried on the moon and moved to the earth, divided and with the chunks duly certified as to the weight and source, though geologically indistinguishable from the earthbound substance, would be a distinct possibility, at least for so long as the trips were few and the moon rock retained the requisite scarcity. (Galbraith, pp. 62-64)
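Galbraith’s (and Mill’s) point reduces to a simple ratio, and a few lines of Python make it visible. This is a deliberately crude sketch with illustrative numbers only; it ignores velocity and hoarding:

```python
# Crude illustration of the scarcity argument above: the value of money
# depends on its supply relative to the goods available for purchase,
# regardless of what the money is made of. (Ignores velocity, hoarding.)

goods_available = 1_000.0   # units of goods offered for sale

for money_supply in (1_000.0, 2_000.0, 4_000.0):
    price_level = money_supply / goods_available
    purchasing_power = 1 / price_level   # goods one unit of money buys
    print(f"money = {money_supply:7.0f} -> price level {price_level:.2f}, "
          f"one unit buys {purchasing_power:.2f} units of goods")

# Doubling the money against the same goods halves its purchasing power:
# scarcity, not the material, sustains the value -- moon rock included.
```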

NEXT: England and France get on the paper money train. England succeeds; France fails.