Sabotage in the Industrial System

The Technocracy Movement was a bid to replace politicians with scientifically-trained engineers, and to replace production for profit by businessmen with production for people’s needs by engineers.

Many of the ideas of technocracy were inspired by a now almost forgotten American economist named Thorstein Veblen. Veblen’s thought has been boiled down to a single idea, that of conspicuous consumption, while all of his other ideas have been forgotten. Or, a cynic might say, suppressed, because he was one of the last economists interested in observing how the economy around him actually worked, rather than simply working out sophisticated mathematical descriptions of markets that existed nowhere in reality.

Veblen made a sharp distinction between business and industry. Simply put, business is the process of making money; industry is the process of making things. If the things are made expressly to be sold, they are called commodities.

Veblen argued that the goals of business and industry were inherently at odds with each other.

Let’s take masks and ventilators as an obvious example. The technical process of making masks and ventilators is the industry. From an industrial standpoint, you would want to make as many masks and ventilators as you are technically capable of producing. That depends on a number of factors: how efficient your factory is, how many raw materials and supplies you can procure, whether you have adequate energy and employees, whether you have sufficient technical know-how, and so on.

Veblen described this as the engineering challenge, which was solved by various types of engineers.

The express goal of the engineers, then, was to make the process of making masks and ventilators as efficient and effective as possible. To do this, they would look at the process and do everything in their power to allow the factories to produce as many masks and ventilators as possible from a technical standpoint. To that end, they might design a more efficient manufacturing machine, streamline the production process, redesign the masks and ventilators with fewer parts and pieces, automate as many repetitive steps as possible, and so on.

In a time like that of COVID-19, the need for masks and ventilators is very great. You would want factories running at all-out capacity to make as many of these things as they possibly can, to the point where there are so many that we can never run out. In times like these, you might even want a ventilator for every person in the entire country, and several masks per person.

In normal times, however, business—as opposed to industry—decidedly does not want to make as many masks and ventilators as it possibly can. Why not? The answer is simple.

Business is the art of making profits, not commodities. In order to make a profit, you have to charge an adequate price for the thing you are selling. And if something becomes too common, its price goes down. That is, if you make too much of something, its price declines, because it is no longer scarce. The more you make, the lower the price goes, and lower prices mean lower profits.

Thus, in order to keep the price level high enough, you need to make sure that there is not too much of what you’re trying to sell.

From that standpoint, then, you would decidedly NOT want the factories pumping out as many masks and ventilators as possible, because then the market would be flooded with those things and the price would go down. If the price goes down, you make less profit.

And so, from a business standpoint, you want to produce only enough of what you are selling to keep the price adequate and stable, so that you can make appropriate profits.

Veblen classified this group as the businessmen, as opposed to the engineers. The businessmen and the engineers, then, are at cross-purposes. The goal of the engineers is to make the process of producing masks and ventilators as efficient and streamlined as possible, while the goal of the businessmen is to make only enough to keep profits high. That means the businessman’s profits are actually jeopardized if the engineers are too good at their job.

Thus, the businessmen want to hold production back from maximum capacity—to make sure that the factories do not go all-out at producing whatever commodity it is they are selling, whether masks, or ventilators, or anything else. To do this, they engage in what Veblen described as sabotage: making the production process less efficient, and/or deliberately producing below capacity. This ensures that the price of the commodity is kept sufficiently high to make adequate profits.
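
To make the arithmetic concrete, here is a minimal sketch in Python. The demand curve, cost figure, and capacity are entirely made-up numbers for illustration (nothing here comes from Veblen himself); the only assumption doing the work is that the price falls as more units reach the market.

```python
# A toy illustration of Veblen's logic, using made-up numbers:
# if price falls with quantity, the most profitable output sits
# well below what the factory could technically produce.

def price(quantity):
    # Hypothetical inverse demand curve: each extra unit lowers the price.
    return max(0.0, 100.0 - 0.5 * quantity)

def profit(quantity, unit_cost=10.0):
    # Revenue minus production cost at a given output level.
    return (price(quantity) - unit_cost) * quantity

capacity = 200  # all-out production: the "industry" standpoint

best_q = max(range(capacity + 1), key=profit)
print(f"All-out production: {capacity} units -> profit {profit(capacity):,.0f}")
print(f"Most profitable output: {best_q} units -> profit {profit(best_q):,.0f}")
# All-out production: 200 units -> profit -2,000
# Most profitable output: 90 units -> profit 4,050
```

Under these assumed numbers, running the factory flat-out would actually lose money, while restricting output to less than half of capacity maximizes profit. The “sabotage” is built into the arithmetic of the price system.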

Thus, Veblen concluded, in many areas we operate the means of production far below their potential capacity on a regular basis. That is, because of the price system, business regularly sabotages the industrial process.

The price system ensured that we would never produce all we were capable of producing. This meant that a good portion of the cleverness and inventiveness of the engineers was going to waste, so that businessmen could make profits. It was the businessmen—the people making the money—who were driving the production process, he argued, not the engineers—the people actually responsible for making things.

The logic is so simple that even a child could grasp it.

***

I’ve used masks and ventilators as the example above because it is so timely and apropos: we need as many as we can possibly get our hands on to responsibly get back to semi-normal (note the word responsibly).

But in actuality, the same rule goes for basically everything in our society that our factories and workshops are capable of producing: we don’t manufacture the amount it is possible to produce; we manufacture the amount it is profitable to produce. Those are not the same. The price system virtually ensures that we will never produce all that we are capable of producing from a purely technical and engineering standpoint.

In some instances, we may be capable of producing enough of something for everyone on the planet, such that no one would have to go without. Under the price system, however, we cannot do so, because if we did, the product would be practically free, and no profits would be made.

The price system, once a logical means of rationing scarce resources, becomes the very thing that causes the resource to be scarce at all!

Put a different way, profits are the cause of scarcity.

***

Here is a good article from the L.A. Review of Books making the same point:

BY NOW, the shortage of medical supplies in the United States is a notorious fact. The nation has between 160,000 and 200,000 ventilators; it may need a million. Masks, gowns, face shields, gloves, bottles of hand sanitizer, and tests for the virus are all in short supply.

The shortage has come as a great surprise, because the government has been contracting with private firms to make these supplies for years, and private firms, as everybody knows, will provide more of any product at a lower price than any central planner ever could. Responding to market signals like greyhounds leaping out of the gates, they race after efficiencies, pushing down costs and boosting productivity.

Yet every day brings fresh evidence of market-based inefficiency. To pick only one example, The New York Times reported on March 29 that a medical supplies company in Costa Mesa, California, which had won a competitive multimillion-dollar contract to make ventilators in 2008, had yet to deliver a single unit. How could a private firm fail so spectacularly to meet the public demand?

A hundred years ago, the economist and satirist Thorstein Veblen was pondering a similar question. In his 1921 book The Engineers and the Price System, he noted that the recent war had demonstrated the tremendous industrial capacity of the advanced nations, yet after the war, unemployment rose and production fell, pushing the industrial world into recession. Machines and men stood idle everywhere, to the great detriment of the public. “[P]eoples are in great need of all sorts of goods and services which these idle plants and idle workmen are fit to produce,” he wrote. “But for reasons of business expediency it is impossible to let these idle plants and idle workmen go to work.”

“Business expediency” meant nothing more than profitability, which Veblen thought was not at all the same thing as productive capacity. In fact, the executive’s job was to reduce the latter in order to ensure the former. “[I]t has become the ordinary duty of the corporate management,” Veblen wrote, “to adjust production to the requirements of the market by restricting the output to what the traffic will bear; that is to say, what will yield the largest net earnings.” Contrary to popular belief, corporate management doesn’t spring forth like a greyhound; it dawdles like a Great Dane.

Veblen had a name for this kind of foot-dragging: sabotage. He pointed out that the word itself derives from the French for “wooden shoe” (sabot), and so it denotes “going slow, with a dragging, clumsy movement, such as that manner of footgear may be expected to bring on.” Because profitability required scaling back production to maximally profitable levels, it followed that economic sabotage “is the beginning of wisdom in all sound workday business enterprise.”

Even if the industrial supply chain is more complicated in our day than it was in Veblen’s, it is still possible to catch the economic saboteurs at work. Returning to the Times story, the original bid-winning company was bought up by another, larger company called Covidien, which begged the federal government for more money, shuffled key employees around the firm (effectively gumming up the gears), and then demanded to be released from the contract. As a result, they received millions of public dollars but provided not a single unit. Veblen would insist that this was not a failure of the free market “price system.” On the contrary, the price system had worked according to its basic laws. As industry observers and government officials explained to the Times, “building a cheaper product […] would undermine Covidien’s profits from its existing ventilator business.”

Who will save our economy (not to mention countless lives) from these vandals? In order to frighten financiers, “absentee owners” of capital, and other guardians of the status quo, Veblen suggested that they should all be replaced by a “Soviet of technicians.” It was the engineers, he argued, who actually knew how to run the factories…

Who Sabotaged the American Economy? Thorstein Veblen Knows (L.A. Review of Books)

***

Another form of sabotage caused by the price/profit system is planned obsolescence.

This is when manufacturers deliberately slow down the performance of the goods they create; deliberately design them to fall apart or stop working after a certain period of time; or hold back the release of new technology in order to ensure future sales.

Although most economists argue that the free market prevents this from happening, there is evidence that some manufacturers do engage in this activity. A few years ago, Apple admitted to deliberately slowing down their products. That is, Apple sabotaged their own devices!

Apple fined $27.4 million in France for slowing down older iPhones without warning (ZDNet)

There have been other notorious cases; perhaps the most notorious is the Phoebus cartel, a consortium of light-bulb makers who all agreed to set a maximum lifespan for their bulbs to encourage continual replacement, and thus continued profits.

Apple is not in the business of making computers, it is in the business of making profits, and it just happens to do this by making electronic goods. This is true of every corporation in the world (except for non-profits). What they make is a means to an end, not the end in itself.

Product Longevity and Planned Obsolescence (Conversable Economist)

***

The upshot is that, in almost every instance, we do not produce all we are capable of producing. This leaves all sorts of shortages that need not exist. In the case of masks and ventilators, this can be fatal. In the case of housing, it’s also a serious problem.

Thus, the cornucopia of goods that our society is theoretically capable of producing will forever be denied to us. A post-scarcity society will never be a possibility as long as the price system exists and maximizing business profits is the ultimate goal for producers.

The Code of Capital

An interesting book came out recently by law professor Katharina Pistor called The Code of Capital. The book explains in detail the qualities that an asset has to have in order to generate wealth over time, and how the law can bestow such properties on an ordinary asset to turn it into a capital asset.

In other words, capital is coded via law, and a set of private legal institutions have been set up and used for centuries in order to code certain things into capital. The first asset coded this way was land, but the code has been extended to more and more things over time, and it continues to be expanded.

I think the book is important because the standard libertarian argument relies on a misunderstanding of things like private property, money, and capital as being somehow “natural” and prior to the state.

But in reality, all of these things are created by human institutions, most particularly the state with its monopoly on coercive power. They are, then, artificial creations designed to arbitrarily privilege some groups over others.

Pistor’s book focuses on capital, and her conclusion is that the institution which creates capital is the legal system. In the book, she attempts to document exactly how this is done. It also has some interesting historical insights into how capitalism emerged out of feudalism with much of the original power relations more-or-less intact.

Capital is an asset that has some potential to generate private wealth. In order to create a capital asset out of an ordinary asset, Pistor argues that you need to do some legal encoding of that asset. This encoding of an asset—grafting onto an asset a particular set of ideas enshrined in the law—is what she refers to as the “code” of capital. The “code of capital” is what flips a simple object, idea, or promise to pay into a wealth-generating capital asset.

Over time, the code of capital has transformed more and more things into income-generating assets. The code was originally developed with respect to land to preserve the wealth of landlords from challenges to their power. Over time, these legal codes have been used to make capital assets out of things like financial instruments, debt and intellectual property.

Pistor argues that you need to encode three out of four of these concepts to flip an ordinary asset into a capital asset. They are:

1. Priority – Having more senior rights to an asset than other people. This is the fundamental rule you need in order to create a capital asset, or even to have private property.

Legal institutions create ranks and prioritize some claims over others. This matters in insolvency, for example, where claims are ranked from strongest to weakest. The strongest claims feed at the trough first, while the weaker “runt” claims get the leftovers, if anything. How those claims are ranked is determined by the law.

Priority rights are the minimum requirement. But to create a capital-generating asset, you need more.

2. Durability – If an asset can end up easily on the auction block, the ability to accumulate wealth over time is limited. Durability protects assets and asset pools from too many counterclaims. It extends priority rights in time.

Durability was first established using the entail in English common law to protect assets from being seized by creditors. See Fee tail (Wikipedia)

3. Universality – Priority and Durability need to be enforced against everyone, not only against the parties with whom you have directly negotiated these interests—erga omnes in the legal terminology.

Universality gives lawyers the ability to create assets that have priority rights that will be universally enforced not just against the contracting parties, but against anybody, whether or not they knew about the arrangement or were parties to the deal. Universality extends priority rights in space.

4. Convertibility – This allows you not just to transfer an asset, but also to flip it into another, safer asset if necessary.

This most obviously comes into play in turning an object into cash. State-issued currency retains its nominal (though not its actual) value. The ability to do this is especially critical in giving financial assets durability. Without being able to “cash out” paper assets, they might lose much, or even all, of their value. As Pistor describes, convertibility was especially critical in turning things like CDOs and other debt-based securities into capital assets.

When these legal ideas are grafted onto an asset (any asset), it becomes an income-generating asset; that is, capital. Any object, promise of payment, or idea can be flipped into a capital asset using these legal tools, this “code.”

A lot of these were first developed with respect to land, and then were grafted onto other assets: farms, debt, firms, know-how, and even data, with data being the most recent and still ongoing extension.

The legal modules to do this have remained fairly stable over time. She enumerates them as: Property law, Contract law, Corporate law, Collateral law, Trust law and sometimes Bankruptcy law, which can mimic features of all the others.

This also provides a useful definition of what capital is. Obviously, capital is the beating heart of our economic system. Yet, remarkably, people still argue over what it even is!

Marx entitled his masterwork Capital, and placed it at the center of his analysis. Following him, the overall economic system has been termed “capitalism.” But a solid definition of what does and does not constitute capital has remained elusive. Using the above, we might define it as an asset—physical or otherwise—that possesses three out of four of the legal properties of priority, durability, universality, and convertibility.

Economists often claim that the central factors of production are land, labor, and capital. But land and labor can be capital. In fact, anything can be capital if it is coded as such. She points out that one’s own labor can be coded as capital by establishing a corporate entity and issuing dividends to yourself as a corporate shareholder in lieu of a salary.

What this demonstrates is that the law is intimately involved in the creation of capital.

“What are the functions that law plays? What you need to convert an asset into a capital asset is a credible commitment of enforceability. You want to make sure that you can enforce your rights at some future date in some place, and maybe even in some place outside your own jurisdiction. You need to have the institutionalization of the centralized means of coercion that private parties can use to organize their private affairs so that they can bank on enforceability. At some level, at every stage in the creation of capital and the creation of financial markets, I would say in the creation of markets in general, the state is deeply involved.”

But what do we mean by law? Law is a particular institutionalization of the central state’s coercive powers. Pistor distinguishes three dimensions in which we have institutionalized law:

1. Top-down vertical ordering – the state enforces order among its citizens through a monopoly on coercive violence.

But the flip side of this vertical dimension is that, in rule-of-law based constitutional systems, citizens can also use the law to protect their interests against the state. This aspect is often ignored. Bottom-up vertical ordering allows private actors to constrain the state; to “tie its hands,” as it were.

The centralization of the means of coercion on the one hand, and the ability of individuals to avail themselves of the legal system to protect their private property rights against the state through civil and political rights on the other, together constituted an enormous institutional revolution.

2. Horizontal ordering – Private parties can employ the coercive properties of the state to organize their own private affairs. This means that private relations can be structured much more forcefully than they otherwise could be by private parties availing themselves of the state’s legal system.

Pistor traces this legal coding all the way back to the thirteenth and fourteenth centuries in England. For whatever reason, England developed a very powerful private legal profession very early on. Wealthy landowners availed themselves of the services of professional attorneys much more often than their continental counterparts. On the continent, the legal profession was less empowered: France controlled it in a top-down fashion, and Prussia halved the private legal profession in the eighteenth century because lawyers were seen, correctly, as a threat to state power.

As Pistor depicts it, many of these laws originated for the benefit of the large, aristocratic landlords, in essence to preserve the power relations of feudalism. They enshrined these pre-modern, pre-legal, pre-constitutional power relations into law. The lawyers who did this were often descendants of these very same aristocratic landowning families, so it’s obvious whose side they were really on.

This adds an interesting perspective on the libertarians’ “year zero” problem. They usually argue for a “night watchman” state that only protects private property rights and lets everything else just sort of work itself out. But where do these property rights come from in the first place? Why do some people have priority over others?

Matt Bruenig makes this same point:

Perhaps the most interesting thing about libertarian thought is that it has no way of coherently justifying the initial acquisition of property. How does something that was once unowned become owned without nonconsensually destroying others’ liberty? It is impossible. This means that libertarian systems of thought literally cannot get off the ground. They are stuck at time zero of hypothetical history with no way forward.

How Did Private Property Start (Jacobin)

Pistor’s book fills in some of the gaps in that process. It was through law that such priority rights were established, and legal decisions usually favored certain stakeholders over others. These decisions cannot always be said to be “fair.” Rather, she argues that they come from whatever legal arguments happen to carry the day in court.

During the sixteenth century in England, there was a legal dispute over who had better property rights to the land—the landlords or the commoners. Who had priority rights over the commons, the peasants or the aristocracy?

The commons was an area where multiple, overlapping stakeholders had multiple, overlapping claims, and those claims were balanced against one another for centuries by traditional customs without written legal precedent determining ownership. So whose claims would take priority if one side defected, and whose claims would be downgraded, or even dismissed?

At first, this dispute was conducted in a decentralized fashion, in hundreds of sporadic conflicts all up and down the British Isles. The landlords attempted to assert their rights over the commons. The commoners rebelled. There was violence, with fences broken and hedges dug up. The “Diggers” were so named because they dug under the fences and hedgerows planted by landlords to mark their territory.

Eventually, these disputes wound up in the courts. The attorneys—by and large children of the nobility—argued on behalf of the landlords. The landlords won. The argument that carried the day in court was seniority: the landlords had the stronger claims to the land because their rights took precedence. In essence, they were there first. Another strike against the commoners was that they were not organized into a single, coherent corporate entity, so unlike the landowners, they could not assert collective rights. They were simply seen as numerous private individuals by the courts. More recent scholarship has shown that by the early seventeenth century, two-thirds of arable land in England had already been enclosed, even before the major Enclosure Acts were passed.

These decisions gave the landlords priority rights. But you also needed to have shielding devices to create sustainable value.

To this end, the landed elite in England learned how to entail their land to preserve it down through time. Lawyers took a page from feudal law and argued that the contracts that potential creditors had entered into were with the “life tenant,” the person who drew the profits off the land. The life tenant, however, was not the real owner, they said; he only held the asset for future generations. Under feudal law, this meant that creditors could seize only 50 percent of the land, and never the family mansion.

Entailment gave English landlords durability. When the land was no longer able to generate sufficient revenue, thanks to the repeal of the Corn Laws and the flooding of the market with cheap grain, the landlords could shield themselves from creditors and keep the land in the family. This caused a debtor crisis by the mid-nineteenth century. In 1881 the English courts declared that the life tenant was the true owner, and therefore creditors could seize all of the land. After the Settled Land Act and the Conveyancing Act were passed, almost 20 percent of land changed hands. The repeal of durability greatly affected the value of land as a capital asset.

When England started seizing lands from aboriginal peoples all over the world, obviously the “they were here first” argument wouldn’t hold water. So the attorneys switched up their arguments to improvement and discovery. Improvement is the argument made by John Locke, i.e. you combine your labor with the land to make it productive, so that gives you ownership rights to the land. Discovery is a sovereign territorial claim. It boils down to, essentially, “finders keepers.”

Note that this is the inverse of the priority rights that were argued during the enclosure movement. Under the feudal system, it was by and large the labor of the commoners that brought forth the fruits of the land. Yet back then, this gave them no special claim to ownership! Landlords had some legal and administrative duties during the feudal era, but the Crown (i.e. the state) had largely taken over those functions by the time these disputes showed up in the courts. Thus, landlords contributed very little labor to the land, and yet they claimed exclusive ownership rights over it, and won in court!

And, of course, how does the labor theory of property apply to a financial asset?

In other words, what justifies one’s claim to an asset appears to be whatever the apologists for those with power argue it should be. And that obviously favors those already with the power.

Pistor highlights the fight by the indigenous Maya to encode their collective use rights as property rights in the Supreme Court of Belize. The constitution of Belize—as most constitutions do—says that property rights will be protected. But it does not define what counts as property—it simply assumes it. The Supreme Court of Belize eventually recognized the priority claims of the indigenous Maya to their ancestral lands. But the state never used its coercive power to back up those claims. Instead, Mayan land continued to be bought and sold by outsiders.

In telling this story, Pistor’s core point is this: what we recognize and what we do not recognize as property is a political decision that we make. In making these decisions the state tends to favor the rights of those who will generate more wealth for the state.

Since the end of the nineteenth century in Britain, we’ve shifted from protecting the landowners and their capital to protecting the credit claimants. By elevating creditor claims above all other claims, we have allowed financialization to occur. This has subsequently engendered all the “exotic” financial instruments based on debt that we see circulating today.

Pistor goes into detail about how these legal coding techniques were used to turn exotic financial instruments into capital assets via law. In doing so, the features of durability, universality, and convertibility became paramount in turning paper claims into capital assets. In fact, it was in trying to understand the exotic financial instruments underlying the global financial crisis that she discovered the code of capital. For example, the “code” allowed the mortgage debts of millions of ordinary homeowners to be turned into capital assets that could be traded and used for wealth generation. Her arguments here are fairly complex, so if you’re interested you should look further into the book.

An important point she makes is that land and other tangible objects are still usable in certain ways even without any legal coding. You can grow crops on land. You can drive a tractor. You can milk a cow.

However, intellectual property rights and financial assets only exist in law. These are entirely creations of the law.

And so, she notes, we’ve created a legal system where we create brand new assets ex nihilo through the law, and then further enhance these assets with the additional attributes of priority, durability, universality, and convertibility to turn them into wealth-generating assets. Of course, this benefits certain people over others.

Finally, she addresses the issue of universality. Private law is domestic law, but we live in a global capitalist system. And so how can you have domestic law sustaining a system of global capitalism when we don’t have a global state?

Pistor argues that as long as all global states choose to recognize the features of a specific legal system, you can, in theory, have legal universality even without a single, global legal system. For financial capitalism, the legal systems that currently serve this purpose are the laws of England and of the state of New York, plus Delaware for corporate law. Thus globalization turns out to be a very parochial system of coding, rooted in just two countries’ legal systems! This gives Anglo-Saxon firms a legal advantage in crafting these types of assets, including financial assets. The world’s largest and most powerful law firms are all headquartered in Anglo-Saxon countries, where most of the legal coding work is done.

“The globalization of legal practice which is the very foundation of our global capitalist system is ultimately a globalization of Anglo-Saxon, particularly American legal practices.”

What started centuries ago in land has been extended to corporations, to financial assets, to intellectual property rights, to data, and potentially to many more things. As she notes, even exotic things like DNA are being eyed as potential capital assets.

As a result, citizens of various states increasingly feel as if they’ve lost control of their own domestic destiny. With everything around them being rapidly turned into capital assets for international markets, they feel helpless. They feel that collective self-governance has fallen by the wayside under this system. Pistor points out that Brexit was rooted in the idea that the people had lost their legal sovereignty. She argues that this perception was essentially correct, but it was not really a takeover by Brussels (the EU headquarters) so much as a takeover by London.

The Code of Capital illustrates that the neofeudal order coalescing today is not some inevitable force of nature, but the imposition of a specific legal code on all of us, one that turns the entire world into capital assets owned and traded by an international oligarchy of wealth while local communities are steadily hollowed out. It is the endgame of global capitalism; the final gutting of civil society. Despite the assertions of neoliberals and libertarians, there is nothing “natural” about it. It is blatantly obvious whom the state’s monopoly on coercive violence now serves, and it’s not the citizens of the world’s various countries, but a transnational investor elite.

I’ve taken the above information from YouTube talks and interviews given by Professor Pistor.

Talk at the Watson Institute: https://www.youtube.com/watch?v=m81pkJs5fcY

Talk in Brussels: https://www.youtube.com/watch?v=UwsJhnOmebM

Majority Report interview: https://youtu.be/yArlk9a--ck

Book review from the London School of Economics

The Demise of Kinship in Europe

We’ve talked extensively about how the basic constituent of human society is the extended kinship group. In many parts of the world, this is still the default form of human social organization. If there is any “natural” form of human social organization discernible from evolutionary biology, this is it.

From it all the basic structures of traditional societies are derived: religion, politics, law, marriage, inheritance, etc. We’ve frequently mentioned Henry Sumner Maine’s book, Ancient Law. The entire book can be summed up in the following passages:

[A]rchaic law … is full, in all its provinces, of the clearest indications that society in primitive times was not what it is assumed to be at present, a collection of *individuals*. In fact, and in the view of the men who composed it, it was an *aggregation of families*. The contrast may be most forcibly expressed by saying that the *unit* of an ancient society was the Family, of a modern society the Individual.

We must be prepared to find in ancient law all the consequences of this difference. It is so framed as to be adjusted to a system of small independent corporations. It is therefore scanty, because it is supplemented by the despotic commands of the heads of households. It is ceremonious, because the transactions to which it pays regard resemble international concerns much more than the quick play of intercourse between individuals. Above all it has a peculiarity of which the full importance cannot be shown at present.

It takes a view of *life* wholly unlike any which appears in developed jurisprudence. Corporations *never die*, and accordingly primitive law considers the entities with which it deals, i. e. the patriarchal or family groups, as perpetual and inextinguishable. This view is closely allied to the peculiar aspect under which, in very ancient times, moral attributes present themselves.

The moral elevation and moral debasement of the individual appear to be confounded with, or postponed to, the merits and offences of the group to which the individual belongs. If the community sins, its guilt is much more than the sum of the offences committed by its members; the crime is a corporate act, and extends in its consequences to many more persons than have shared in its actual perpetration. If, on the other hand, the individual is conspicuously guilty, it is his children, his kinsfolk, his tribesmen, or his fellow-citizens, who suffer with him, and sometimes for him.

It thus happens that the ideas of moral responsibility and retribution often seem to be more clearly realised at very ancient than at more advanced periods, for, as the family group is immortal, and its liability to punishment indefinite, the primitive mind is not perplexed by the questions which become troublesome as soon as the individual is conceived as altogether separate from the group.

https://oll.libertyfund.org/titles/2001#Maine_1376_90 (italics in original, emphasis mine)

On the difference between laws based on lone individuals, and laws based on social groups, he writes:

…It will be observed, that the acts and motives which these theories [of jurisprudence] suppose are the acts and motives of Individuals. It is each Individual who for himself subscribes the Social Compact. It is some shifting sandbank in which the grains are Individual men, that according to the theory of Hobbes is hardened into the social rock by the wholesome discipline of force…

But Ancient Law, it must again be repeated, knows next to nothing of Individuals. It is concerned not with Individuals, but with Families, not with single human beings, but groups. Even when the law of the State has succeeded in permeating the small circles of kindred into which it had originally no means of penetrating, the view it takes of Individuals is curiously different from that taken by jurisprudence in its maturest stage. The life of each citizen is not regarded as limited by birth and death; it is but a continuation of the existence of his forefathers, and it will be prolonged in the existence of his descendants…

https://oll.libertyfund.org/titles/2001#Maine_1376_164

As we saw last time, these are called identity rules, as opposed to personal rules, which deal mainly with specific, unique individuals; and general rules, which theoretically apply to everyone equally, regardless of one’s rank, kinship group, ethnic background, religious beliefs, wealth, or any other intrinsic characteristic.

Last time we saw that general rules came about because it became impossible for rulers to sort people by religion after the Catholic Church fragmented, despite numerous failed attempts by “all the king’s horses and all the king’s men” to put Humpty Dumpty back together again. Religious minorities began springing up all over Europe like mushrooms after a rain, challenging the old ways of ruling. Martin Luther only wanted to reform the universal Church; instead he broke it apart. Luther’s emphasis on a personal relationship with God through reading the Bible directly (something that was only possible in Early Modernity), meant that the intermediaries between man and God—the Church and priesthood—saw their power and influence diminish. This, in turn, empowered ambitious Early Modern rulers.

General rules supplanted the ancient laws described by Maine above, leading to a more fragmented and individualistic society. This, in turn, allowed for the commodification of land and labor, which is necessary for capitalism. For example, the selling off of the monasteries seems to have kickstarted the first large real estate markets in England. As Maine argued, status was replaced by contract; Gemeinschaft was supplanted by Gesellschaft.

But, in reality, individualism in Europe was under way long before that.

Whither Tribes?

Europe has long shown a curious lack of extended kinship groups, that is, tribes. If you’ve read Roman history, you know that the Western Empire came under pressure from large migrations of tribal peoples that we subsume under the label “Germanic” due to their languages, along with some other exotic breeds like the Asiatic Huns. Their tribal structure, from what little we can determine, seems to have been quite similar to that of tribal peoples the world over, including in North America, Africa, and Asia.

Location of the Germanic tribes on the border of the Roman Empire before the Marcomannic Wars, ca. 50 AD, by Karl Udo Gerth (2009) [Source]
I’m sure you can recall the names of some of them: the Lombards, the Alemanni, the Burgundians, the Visigoths, the Ostrogoths, the Frisians, the Angles and Saxons, the Beans and Franks, and many, many more. The Goths managed to devastate the Roman Empire despite their mopey attitudes and all-black clothing, while the Vandals left spraypaint up and down the Iberian peninsula and down into North Africa.

As I said last time, ancient societies were collectivist by default. But this all changed, particularly in Western Europe. Why Europe? Why was Europe the apparent birthplace of this radically new way of life?

That’s the subject of the paper I’m discussing today, which has received a fairly large amount of press attention. The paper itself is 178 pages—basically a small book (although much of that is data). The idea is that these extended kinship groups were broken up by the Roman Catholic Church via its strict prohibition against marriages between close kin, especially between cousins.

[A] new study traces the origins of contemporary individualism to the powerful influence of the Catholic Church in Europe more than 1,000 years ago, during the Middle Ages.

According to the researchers, strict church policies on marriage and family structure completely upended existing social norms and led to what they call “global psychological variation,” major changes in behavior and thinking that transformed the very nature of the European populations.

The study, published this week in Science, combines anthropology, psychology and history to track the evolution of the West, as we know it, from its roots in “kin-based” societies. The antecedents consisted of clans, derived from networks of tightly interconnected ties, that cultivated conformity, obedience and in-group loyalty—while displaying less trust and fairness with strangers and discouraging independence and analytic thinking.

The engine of that evolution, the authors propose, was the church’s obsession with incest and its determination to wipe out the marriages between cousins that those societies were built on. The result, the paper says, was the rise of “small, nuclear households, weak family ties, and residential mobility,” along with less conformity, more individuality, and, ultimately, a set of values and a psychological outlook that characterize the Western world. The impact of this change was clear: the longer a society’s exposure to the church, the greater the effect.

Around A.D. 500, explains Joseph Henrich, chair of Harvard University’s department of human evolutionary biology and senior author of the study, “the Western church, unlike other brands of Christianity and other religions, begins to implement this marriage and family program, which systematically breaks down these clans and kindreds of Europe into monogamous nuclear families. And we make the case that this then results in these psychological differences.”

Western Individualism Arose from the Incest Taboo (Scientific American)

The medieval Catholic Church may have helped spark Western individualism (Science News)

Although reported as if it were some sort of new discovery, this concept is hardly new. In fact, this hypothesis has been around for quite a long time—since at least the 1980s. Francis Fukuyama’s book, “The Origins of Political Order,” even has a chapter entitled “Christianity Undermines the Family,” where he expounds this hypothesis in detail. As another example, the most popular post on the notorious hbd chick’s blog is entitled whatever happened to european tribes? (hbd chick does not use capital letters), and dates from 2011. She quotes a paper by Avner Greif (whom we met last time): “Family structure, institutions, and growth – the origin and implications of Western corporatism”.

“The medieval church instituted marriage laws and practices that undermined large kinship groups. From as early as the fourth century, it discouraged practices that enlarged the family, such as adoption, polygamy, concubinage, divorce, and remarriage. It severely prohibited marriages among individuals of the same blood (consanguineous marriages), which had constituted a means to create and maintain kinship groups throughout history. The church also curtailed parents’ abilities to retain kinship ties through arranged marriages by prohibiting unions in which the bride didn’t explicitly agree to the union.

“European family structures did not evolve monotonically toward the nuclear family nor was their evolution geographically and socially uniform. However, by the late medieval period the nuclear family was dominant. Even among the Germanic tribes, by the eighth century the term family denoted one’s immediate family, and shortly afterwards tribes were no longer institutionally relevant. Thirteenth-century English court rolls reflect that even cousins were as likely to be in the presence of non-kin as with each other.”

Hbd chick speculates as to why this might be the case (again, no caps for her):

the leaders of the church probably instituted these reproductive reforms for their own gain — get rid of extended families and you reduce the number of family members likely to demand a share of someone’s legacy. in other words, the church might get the loot before some distant kin that the dead guy never met does. (same with not allowing widows to remarry. if a widow remarries, her new husband would inherit whatever wealth she had. h*ck, she might even have some kids with her new husband! but, leave her a widow and, if she has no children, it’s more likely she’ll leave more of her wealth to the church.)

but, inadvertently, they also seem to have laid the groundwork for the civilized western world. by banning cousin marriage, tribes disappeared. extended familial ties disappeared. all of the genetic bonds in european society were loosened. society became more “corporate” (which is greif’s main point).

whatever happened to european tribes (hbd chick)

Cousin Marriage? Ewww!

Now, for us Westerners, the idea of marrying your cousin is kind of gross (which might be an additional confirmation of the thesis). In the United States, jokes about cousin marriage and inbreeding are commonly directed at people living in Appalachia. The movie Deliverance cemented this in the popular consciousness.

But if you know anything about anthropology, you know that cousin marriage isn’t all that uncommon around the world; in fact in some societies it’s considered the most desirable match! Societies use kinship terms to distinguish between parallel and cross-cousins. In most societies, cross-cousin marriage is okay (maybe even preferred), but parallel cousin marriage is a no-no. That’s why the term for “sibling” in many languages often encompasses parallel cousins. That is, marrying your parallel cousin is the same as marrying your brother or sister, i.e. it’s incest. What the Church did, then, was greatly expand the definition of incest:

In many societies, differentiated cousin-terms are prescriptive of the people one can/should or is forbidden to marry. For example, in the Iroquois kinship terminology, parallel cousins (e.g. father’s brother’s daughter) are likewise called brother and sister–an indication of an incest taboo against parallel cousin-marriage. Cross-cousins (e.g. father’s sister’s daughter) are termed differently and are often preferred marriage partners. [1]

And, of course, the choice of marriage partners in a hyper-localized world with basically nothing in terms of mass communication and very little in the way of long-distance transportation would have been much more restricted than what we are used to. The simple invention of the bicycle in the 1800s expanded the pool of potential marriage partners:

The likelihood of finding a suitable marriage partner depends not only on the degree to which one becomes acquainted with the possible marriage partners in a region but also on the changing boundaries of what constitutes a region. A great many studies, on all parts of the globe, have demonstrated that most people tend to marry someone living close by. On foot in accessible terrain – that is, no mud, rivers, mountains, and gorges – one can perhaps walk 20 kilometers [12.4 miles] to another village and walk the same distance back on the same day.

This distance comes close to the limit of trust that separated the known universe from the “unsafe” world beyond. If marriage “horizons” expanded, young suitors would be able to meet more potential marriage partners. The increase in the means and speed of transportation brought about by new and improved roads and canals, and by new means of transport such as the train, the bicycle, the tram, and the motorcar brought a wider range of potential spouses within reach. These new means of transport increased the distance one could travel during the same day, and thus expanded the geographical marriage horizon. [2]

Arranged marriages between kin are designed to keep land and wealth in the same extended kinship lines, rather than breaking them up or turning them over to other families. In societies where lineages are ranked, losing such land and property means a downgrade in social status. That’s why you get extreme versions like sibling marriage in ancient Egypt (with the associated birth defects). Even in fairly modern times, European royalty had a very small pool of suitable marriage partners to choose from (Prince Philip is, in fact, a distant cousin of Queen Elizabeth—no jokes about Prince Charles, please).

Impact of Europe’s Royal Inbreeding: Part II

Although in the modern, developed world, cousin marriage is fairly rare, it’s somewhat more common in societies which are often labelled “traditional”. It does occur among some communities even in the West, however: Did my children die because I married my cousin? (BBC). And I’ve always found a great irony in the fact that Darwin himself married his first cousin.

So, for anthropologists, the prohibition against cousin marriage is a big deal.

WEMP and HL

Anthropologists and historians also discern a distinct marriage pattern in medieval Western Europe, different from much of the rest of the contemporary world; distinct enough to merit the uninspired name of the Western European Marriage Pattern (WEMP). Its distinctive features are:

– Strict monogamy, i.e. no polygyny. We think of this as normal, but in terms of sheer numbers, most cultures have been polygynous (one man being able to marry multiple wives). Monogamy was the norm for Indo-European cultures even before Christianity (e.g., ancient Greece, Rome, India).

– Relatively late age of first marriage. Many cultures married off women at puberty or shortly thereafter – anywhere from 13 to 16 years old. This was seen as necessary in an era of high infant and maternal mortality. But in Europe, both men and women married much later—often in their late twenties, or even older for men. Also, the difference in ages between men and women was slight—typically only a few years. Yet in many parts of the world even today, very young women will be married off to prestigious men who are old enough to be their grandfather! Some people, of course, still lament this change, specifically Judge Roy Moore and everyone involved with Jeffrey Epstein.

– Divorce was difficult to obtain. Marriage was seen as a lifetime commitment, and divorce was accordingly hard to get – just ask Henry the Eighth. Of course, given higher mortality rates – especially in childbirth – in practice this meant “till death do us part” was less of a commitment back then. Today we practice serial monogamy – one partner at a time, but less of a lifetime commitment.

– Many people not marrying at all. See: medieval singlewomen.

– Marriage was voluntary on the part of the woman. No forced marriages here (unless it was to secure some kind of political alliance).

– Fewer children. Rather than just pump out a litter, European couples had fewer children, yet the population still grew overall. No one is quite sure why, but the relatively high status of women may have had something to do with it. Of course, it’s harder to have a large number of children with just one wife, although some people like J.S. Bach managed to do it. As Wikipedia summarizes, “women married as adults rather than as dependents, often worked before marriage and brought some skills into the marriage, were less likely to be exhausted by constant pregnancy, and were about the same age as their husbands.”

– Neolocal households and “nuclear” families. Leaving your parents’ household and establishing your own separate household is, again, fairly standard for us Westerners, but in many places it is atypical. Married couples often live with their extended families in much of the rest of the world: Africa, Asia, Oceania, etc. Even in eastern Europe it was fairly common for couples to live in an extended family household under the control of a patriarch (leading to all sorts of drama). Speaking of Eastern Europe:

The reason it’s called the *Western* European Marriage Pattern is that there is an imaginary line dividing it from the rest of the continent. The divergence in marriage patterns and inheritance practices was discovered by a demographer called John Hajnal, and hence it is called the Hajnal Line (HL). It runs roughly from Trieste to St. Petersburg. Some areas of Western Europe, such as Ireland and parts of southern Europe, are also “outside” the Hajnal line.

To the west of the Hajnal line, about half of all women aged 15 to 50 were married at any given time, while the other half were widows or spinsters; to the east of the line, about seventy percent of women in that age bracket were married at any given time, while the other thirty percent were widows or nuns.

The marriage records of Western and Eastern Europe in the early 20th century illustrate this pattern vividly; west of the Hajnal line, only 25% of women aged 20–24 were married while to the east of the line, over 75% of women in this age group were married and less than five percent of women remained unmarried. Outside of Europe, women could be married even earlier and even fewer would remain celibate; in Korea, practically every woman 50 years of age had been married and spinsters were extremely rare, compared to 10–25% of women in western Europe age 50 who had never married.

Western European Marriage Pattern (Wikipedia)

Exposure to the Church

The idea is that the difference was brought about by the actions of the Catholic Church. More exposure to the Church meant weaker families and fewer kinship ties; less exposure meant that the “default” extended family system was maintained.

Furthermore, there are some ideas that follow from that:

– Western Europeans have weaker family ties.

– Western Europeans have a greater sense of individualism and independent thinking, and a correspondingly higher tolerance for deviants and misfits than other cultures.

– Both of these traits were crucial for the development of capitalism.

The idea is that, since extended kinship groups and tribes disappeared, inclusive institutions were formed in Europe out of necessity, unlike elsewhere. These inclusive institutions, as we saw last time, were critical for the development of general rules and Liberalism. Those developments, in turn, allowed the disruptive institutions of capitalism, as described by Marx, to rework social relations (“all that is solid melts into air”), and led Western Europe to subsequently dominate the modern world. For example, this paper from 2017 by one of the new paper’s co-authors advances the hypothesis that institutional developments gave Western Europe the edge:

Why did Europe pull ahead of the rest of the world? In the year 1000 AD many regions like China or the Middle Easter [sic] were more advanced than Europe. This paper contributes to this debate by testing the hypothesis that the Churches’ [sic] medival [sic] marriage regulations constituted an important precondition for Europe’s exceptional economic development by fostering inclusive institutions. In the medieval period, Churches instituted marriage regulations (most prominently banning kin-marriages) that destroyed extended kin-networks. This allowed the formation of a civic society and inclusive institutions. Consistent with the idea that those marriage regulations were an important precondition for Europe’s institutional development, I present evidence that Western Church exposure already fostered the formation of city level inclusive institutions before 1500 AD

An important building block of the argument is that extended kin networks are detrimental to the formation of a civic society and inclusive institutions. The European kin-structure is unique in the world with the nuclear family dominating and kin marriages are almost absent. In parts of the world, first and second cousin-marriages account for more than 50 percent of all marriages. Kin-marriages lead to social closure and create much tighter family networks compared to less fractionalized societies where the nuclear family dominates for biological, sociological, and economic reasons: kin-selection predicts that the implied higher genetic relatedness increases altruistic behavior towards kin, kin-marriages decrease interaction with and therefore trust in outsiders, and they change economic incentives: supporting one’s nieces and nephews simultaneously benefits the prospective spouses of one’s own children. More importantly, though, in the absence of a supra-level inclusive institutions [sic], the family provides protection and insurance creating a stable equilibrium where individual deviation from loyalty demands is costly. Excessive reliance on the family, nepotism, and other contingencies of strong extended kin-groups in turn impede social cohesion and the formation of states with inclusive institutions.

In line with Acemoglu, Johnson, Robinson and Yared’s notion of critical junctures this paper provides evidence that the Churches’ marriage regulations changed Europe’s social structure by pushing it away from a kin-based society, and paved the way for Europe’s special developmental path. The Churches’ marriage regulations – most prominently the banning of consanguineous marriages (“marriages of the same blood”) – were starting to be imposed in the early medieval ages. Backed by secular rulers, this ban was accompanied by severe punishment of transgressions and was very comprehensive – the Western Church at times prohibited marriages up to the seventh degree of relatedness (that is, marriage between two people sharing one of their 128 great-great-great-great-great-grandparents). Clearly it was impossible to trace and enforce the ban to this degree, yet it demonstrates its severity. The eastern Church also banned cousin-marriage but never to the same extent (providing variation within Christian countries). [3]
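As an aside, the arithmetic behind that parenthetical is just repeated doubling: each generation back doubles the number of ancestor positions, so the seventh degree reaches 2⁷ = 128 great-great-great-great-great-grandparents (ignoring pedigree collapse, where the same person fills more than one position). A throwaway sketch:

```python
# Ancestor positions double each generation back: 2 parents,
# 4 grandparents, 8 great-grandparents, and so on.
for degree in range(1, 8):
    print(f"degree {degree}: {2 ** degree} ancestors")
# degree 7: 128 ancestors -- the figure cited in the quote.
```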

I remember reading an anecdote from Jared Diamond a while ago, and I can’t remember whether it was in an interview or in one of his books (I wish I could find it). He was describing how someone in the village in Papua New Guinea where he was staying wanted to open an ice-cream shop to bring the glory of ice cream to the rest of the village. But this fellow ran into a small problem. In small villages in tribal New Guinea, everyone is basically related to everyone else in some way, however remote. When the budding entrepreneur tried to charge his cousins for an ice cream cone, they reacted with indignation. Charging your relatives for something was considered a severe faux pas! The village was still primarily a reciprocal gift economy, and his relatives simply could not get their heads around the concept that they had to pay for stuff. In the end, he had to choose between making a profit and alienating everyone in the village on whom he depended. The ice cream shop folded.

Why did the church do this? The authors speculate that it may have been less about scripture, and more self-serving:

…the church’s focus on marriage proscriptions rose to the level of obsession. “They came to the view that marrying and having sex with these relatives, even if they were cousins, was something like sibling incest in that it made God angry,” he says. “And things like plagues were explained as a consequence of God’s dissent.”

The taboo against cousin marriage might have helped the church grow, adds Jonathan Schulz, an assistant professor of economics at George Mason University and first author of the paper. “For example,” he says, “it is easier to convert people once you get rid of ancestral gods. And the way to get rid of ancestral gods is to get rid of their foundation: family organization along lineages and the tracing of ancestral descent.”

Western Individualism Arose from the Incest Taboo (Scientific American)

While the Hajnal line was discovered back in 1965, it was unknown why marriage was so different west of the line than east of it until a 1983 book by Jack Goody called “The Development of the Family and Marriage in Europe.” Goody was an anthropologist who specialized in marriage customs and inheritance patterns around the world—things like dowries, bridewealth, primogeniture, partible inheritance, etc. From his study of Medieval Europe, following Hajnal’s discoveries, he was the first to put forward the idea that the Catholic Church’s prohibitions were the critical factor in the demise of the tribal structures and the subsequent rise of Western individualism. This is from Fukuyama’s Origins of Political Order:

Goody notes that the distinctive Western European marriage pattern began to branch off from the dominant Mediterranean pattern by the end of the Roman Empire. The Mediterranean pattern, which included the Roman gens, was strongly agnatic or patrilineal, leading to the segmentary organization of society. The agnatic group tended to be endogamous, with some preference for cross-cousin marriage. There was a strict separation of the sexes and little opportunity for the women to own property or participate in the public sphere. The Western European pattern was different in all these respects: inheritance was bilateral; cross-cousin marriage was banned and exogamy promoted; and women had greater rights to property and participation in public events.

The shift was driven by the Catholic church, which took a strong stand against four practices: marriages between close kin, marriages to the widows of dead relatives (the so-called levirate), the adoption of children, and divorce. The Venerable Bede, reporting on the efforts of Pope Gregory I to convert the pagan Anglo-Saxons to Christianity in the sixth century, notes how Gregory explicitly condemned the tribe’s practices of marriage to close relatives and the levirate. Later church edicts forbade concubinage, and promoted an indissoluble, monogamous lifetime marriage bond between men and women.

…The reason that the church took this stand, in Goody’s view, had much more to do with the material interests of the church than with theology. Cross-cousin marriage (or any other form of marriage between close relatives), the levirate, concubinage, adoption, and divorce are what he labels “strategies of kinship” whereby kinship groups are able to keep property under the group’s control as it is passed down from one generation to another….the church systematically cut off all the available avenues for passing down property to descendants. At the same time, it strongly promoted voluntary donations of land and property to itself. The church stood to benefit materially from an increasing pool of property-owning Christians who died without heirs.

The relatively high status of women in Western Europe was an accidental by-product of the church’s self-interest. The church made it difficult for a widow to remarry within the family group and thereby reconvey her property back to the tribe, so she had to own the property herself. A woman’s right to own property and dispose of it as she wished stood to benefit the church, since it provided a large source of donations from childless widows and spinsters. And the woman’s right to own property spelled the death knell of agnatic lineages, by undermining the principle of unilateral descent.

The Catholic church did very well financially in the centuries following these changes in the rules…By the end of the seventh century, one-third of the productive land in France was in ecclesiastical hands; between the eighth and ninth centuries, church holdings in northern France, the German lands, and Italy doubled….The church thus found itself a large property owner, running large manors and overseeing the economic production of serfs throughout Europe. This helped the church in its mission of feeding the hungry and caring for the sick, and it also made possible a vast expansion of the priesthood, monasteries, and convents. But it also necessitated the evolution of an internal managerial hierarchy and set of rules within the church itself that made it an independent political player in medieval politics. [4]

Despite all this, it remained just a speculative, unproven hypothesis. What Henrich et al.’s paper does is amalgamate a large amount of interdisciplinary data to try to back up the hypothesis. Their idea is that such prohibitions would have altered the cultural behavior of those societies relative to the ones around them, and that this cultural behavior can be detected through things like church records, the use of intermediary financial instruments, the frequency of blood donations, and even unpaid parking fines. By establishing a correlation between Church exposure and these sorts of socio-cultural behaviors, they argue, we can see the roots of the cultural differences between the rest of the world and what they term WEIRD cultures: Western, Educated, Industrialized, Rich, and Democratic.

In the course of their research, Henrich and his colleagues created a database and calculated “the duration of exposure” to the Western church for every country in the world, as well as 440 “subnational European regions.” They then tested their predictions about the influence of the church at three levels: globally, at the national scale; regionally, within European countries; and among the adult children of immigrants in Europe from countries with varying degrees of exposure to the church.

In their comparison of kin-based and church-influenced populations, Henrich and his colleagues identified significant differences in everything from the frequency of blood donations to the use of checks (instead of cash) and the results of classic psychology tests—such as the passenger’s dilemma scenario, which elicits attitudes about telling a lie to help a friend. They even looked at the number of unpaid parking tickets accumulated by delegates to the United Nations…In their analysis of those tickets, the researchers found that over the course of one year, diplomats from countries with higher levels of “kinship intensity”—the prevalence of clans and very tight families in a society—had many more unpaid parking tickets than those from countries without such history.

The West itself is not uniform in kinship intensity. Working with cousin-marriage data from 92 provinces in Italy (derived from church records of requests for dispensations to allow the marriages), the researchers write, they found that “Italians from provinces with higher rates of cousin marriage take more loans from family and friends (instead of from banks), use fewer checks (preferring cash), and keep more of their wealth in cash instead of in banks, stocks, or other financial assets.” They were also observed to make fewer voluntary, unpaid blood donations.

Western Individualism Arose from the Incest Taboo (Scientific American)
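For the curious, the mechanics of “establishing a correlation” here are conceptually simple, even if the paper’s actual statistics are far more careful. The following is a minimal sketch of the idea in Python; the numbers and column names (church_exposure_years, kinship_intensity, unpaid_tickets) are invented placeholders of mine, not Henrich et al.’s actual variables or results:

```python
import pandas as pd
from scipy import stats

# Hypothetical country-level data. The column names and numbers are
# invented placeholders, not Henrich et al.'s actual variables.
df = pd.DataFrame({
    "church_exposure_years": [0, 150, 400, 700, 950, 1100],
    "kinship_intensity":     [0.9, 0.8, 0.6, 0.4, 0.3, 0.2],
    "unpaid_tickets":        [120, 95, 60, 30, 12, 5],
})

# Step 1 of the paper's logic, in miniature: longer exposure to the
# Western Church should predict lower kinship intensity.
r1, p1 = stats.pearsonr(df["church_exposure_years"], df["kinship_intensity"])

# Step 2: higher kinship intensity should predict more rule-bending,
# proxied here by diplomats' unpaid parking tickets.
r2, p2 = stats.pearsonr(df["kinship_intensity"], df["unpaid_tickets"])

print(f"exposure vs. kinship intensity: r = {r1:.2f}, p = {p1:.3f}")
print(f"kinship intensity vs. tickets:  r = {r2:.2f}, p = {p2:.3f}")
```

The real paper adds controls, regional comparisons, and robustness checks; the point of the sketch is only that the hypothesis boils down to testable correlations between exposure, kinship intensity, and everyday behavior.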

This builds on Henrich’s previous finding that such WEIRD cultures score differently on certain psychological tests than people in the rest of the world’s cultures. That paper was a widely-cited bombshell. For years, psychology studies confined themselves to Western subjects, particularly the undergraduate students at the universities where the studies were carried out. It was simply assumed that people thought pretty much the same way everywhere, and therefore Western college students could safely be used as a stand-in for humans more generally.

Henrich, an anthropologist, administered those studies to members of diverse tribal peoples around the world, which, remarkably, hadn’t been done before. The results indicated that using Westerners—particularly rich, well-educated ones—as stand-ins for the entire human race in psychological tests was fundamentally flawed. We are, in fact, outliers when it comes to human behavior. This has profound implications for economics and sociology.

The weirdest people in the world? (PDF)

If Westerners really are different, then why is that? This paper attempts to answer the question.

Kinship vs. Capitalism

Both Max Weber and Karl Marx realized that the destruction of large corporate kinship groups and the separation between the household and the market economy were the prerequisites for later capitalist production. Both traced this change to sometime between the sixteenth and the nineteenth centuries. Weber focused on the culture of Protestantism as the cause, while Marx focused on the changing methods of economic production during the period, such as the Enclosure movement and the subsequent explosion of rootless wage laborers. Weber’s ideas were later expanded by the sociologist Talcott Parsons. Karl Polanyi likewise dated the shift from householding economies and cottage industries to Market Society to this same time frame.

However, a very influential book called “The Origins of English Individualism” by Alan Macfarlane argued that England was basically an individualistic culture by 1250—long before its continental neighbors.

By shifting the origins of capitalism well before the Black Death, we alter the nature of a number of other problems. One of these is the origin of modern individualism. Those who have written on the subject have always accepted the Marx-Weber chronology. For example, David Riesman assumed that modern individualism emerged out of an older collectivist, “tradition-directed” society, in the fifteenth and sixteenth centuries. Its growth was directly related to the Reformation, Renaissance and the break-up of the old feudal world. The “inner-directed” stage of intense individualism occurred in the period between the sixteenth and nineteenth centuries. Though a recent general survey of historical and philosophical writing on individualism concedes that some of the roots lie deep in classical and biblical times and also in medieval mysticism, still, in general, it stresses the Renaissance, Reformation and the Enlightenment as the period of great transition. Many of the strands of political, religious, ethical, economic and other types of individualism are traced back to Hobbes, Luther, Calvin and other post-1500 writers.

Yet, if the present thesis is correct, individualism in economic and social life is much older than this in England. In fact within the recorded period covered by our documents, it is not possible to find a time when an Englishman did not stand alone. Symbolized and shaped by his ego-centered kinship system, he stood in the center of the world. This means that it is no longer possible to “explain” the origins of English individualism in terms of either Protestantism, population change, the development of a market economy at the end of the middle ages, or other factors suggested by the writers cited. Individualism, however defined, predates sixteenth-century changes and can be said to shape them all. The explanation must lie elsewhere, but will remain obscure until we trace the origins back even further than attempted in this work. [5]

Macfarlane claims that already by the thirteenth century, the evidence indicates that England was no longer what he terms a “peasant society,” or what we’ve been referring to as a “traditional society.” Even way back then, he says, England had more characteristics in common with later capitalist societies than with traditional ones: freeholding of land, wage labor, free choice of marriage partners, individual inheritance, alienable property, geographical and social mobility, and so forth. From a review of the book:

The bulk of this short book is taken up by attempting to demonstrate that the characteristics of peasant society did not apply to England from the thirteenth century onward…In peasant societies land is not individualized but is held by the entire family through time and is seldom sold, since it is greatly revered; in England from the twelfth century onward, land was held by individuals (both men and women) and was often sold to nonfamily members, especially since geographical mobility of families was high and since children were sometimes disinherited.

In peasant societies the unit of ownership (the joint family) is also the unit of production and consumption; in England at the time the nuclear family (rather than the stem or joint family) was predominant, and the children often worked as servants for other families, rather than for their own families.

In peasant society, the families are economically almost self-sufficient, production for the market is small, and cash is scarce; in England at that time, the economy was highly monetized, agricultural production for the market was important, and the existence of elaborate books of accounts of farms attest to their “rational” attitudes toward money making (there was even money-lending for interest in rural areas).

In peasant societies there is a certain income and social equality between families that work on the land and a large gap in income and social status stands between them and other social groups, so that little mobility occurs between classes; in England at that time, considerable differentiation of wealth among the rural workers could be found and, in addition, some mobility between classes occurred.

Finally, in peasant societies women have a low age of marriage, their marriage partners are selected for them, and few remain unmarried; in England at that time, women apparently had a moderate age of marriage, selected their own partners, and, in many cases, did not marry at all. [6]

What this means is that the sociologists and economic historians who use England as the exemplar of a transition from a feudal, peasant society to a capitalist one are looking in the wrong time period! The transition took place long before the era they were examining, as Macfarlane explains:

…if we are correct in arguing that the English now have roughly the same family system as they had in about 1250, the arguments concerning kinship and marriage as a reflection of economic change become weaker. To have survived the Black Death, the Reformation, the Civil War, the move to the factories and the cities, the system must have been fairly durable and flexible. Indeed, it could be argued that it was its extreme individualism, the simplest form of molecular structure, which enabled it to survive and allowed society to change. Furthermore, if the family system pre-existed, rather than followed on industrialization, the causal link may have to be reversed, with industrialization as a consequence, rather than a cause, of the basic nature of the family. [7]

Macfarlane’s book did not answer the question as to why the English were so different from the rest of the continent (for additional criticism, see this [PDF]). However, beginning with Goody’s book, attention became focused on the efforts of the Catholic Church to break up kin groups in Anglo-Saxon England. This may have been where the practice began, as Henrich noted in a 2016 interview with Tyler Cowen:

When the church first began to spread its marriage-and-family program where it would dissolve all these complex kinship groups, it altered marriage. So it ended polygyny, it ended cousin marriage, which stopped the kind of . . . forced people to marry further away, which would build contacts between larger groups. That actually starts in 600 in Kent, Anglo-Saxon Kent.

Missionaries then spread out into Holland and northern France and places like that. At least in terms of timing, the marriage-and-family program gets its start in southern England.

Joseph Henrich on Cultural Evolution, WEIRD Societies, and Life Among Two Strange Tribes (Conversations with Tyler)

This might explain why Anglo-Saxon culture is so manifestly different from other cultures, with its emphasis on individualism, hustling, shallow social ties, and “making your own way.” This was further cemented by the fact that England was conquered by a foreign people in 1066—the Normans—who inserted themselves as a new ruling stratum above the local lords in the prevailing feudal system. As one of my readers pointed out, the Normans had contempt for those beneath them, so much so that they didn’t even bother to learn the local language of those they ruled over. The “Norman yoke” might be another ingredient in the origins of English attitudes toward individualism. As Brad DeLong put it, “The society of England becomes more unequal because William the Bastard from Normandy and his thugs with spears—300 families, plus their retainers—kill King Harold Godwinson, and declare that everyone in England owes him and his retainers 1/3 of their crop.” And besides, with such a hodgepodge of peoples—Normans entering an already multicultural society of Angles, Saxons, Jutes, Danes, various Celts, and so forth—it’s hard to see how a tribal society could have persisted on one small island without strict prohibitions against intermarriage (for example, Japan has a similar lack of tribes, except for minorities like the Ainu people).

The feudal system, with its emphasis on contractual obligations, was itself a substitute for the tribal solidarity that by that time had already been eroded. Henry Maine argued that feudalism was an amalgamation of earlier tribal customs with imported Roman legal systems of voluntary contract:

Feudalism…was a compound of archaic barbarian usage with Roman law…A Fief was an organically complete brotherhood of associates whose proprietary and personal rights were inextricably blended together. It had much in common with an Indian Village Community and much in common with a Highland clan. But…the earliest feudal communities were neither bound together by mere sentiment nor recruited by a fiction. The tie which united them was Contract, and they obtained new associates by contracting with them...The lord had many of the characteristics of a patriarchal chieftain, but his prerogative was limited by a variety of settled customs traceable to the express conditions which had been agreed upon when the infeudation took place.

Hence flow the chief differences which forbid us to class the feudal societies with true archaic communities. They were much more durable and much more various…more durable, because express rules are less destructible than instinctive habits, and more various, because the contracts on which they were founded were adjusted to the minutest circumstances and wishes of the persons who surrendered or granted away their lands.

The medieval historian Marc Bloch also noted that feudalism was a substitute for earlier social ties which had been abandoned:

Yet to the individual, threatened by the numerous dangers bred by an atmosphere of violence, the kinship group did not seem to offer adequate protections, even in the first feudal age. In the form in which it then existed, it was too vague and variable in its outlines, too deeply undermined by the duality of descent by male and female lines. That is why men were obliged to seek or accept other ties. On this point history is decisive, for the only regions in which powerful agnatic groups survived–German lands on the shores of the North Sea, Celtic districts of the British Isles–knew nothing of vassalage, the fief and the manor. The tie of kinship was one of the essential elements of feudal society; its relative weakness explains why there was feudalism at all. [8]

I should note that medieval guilds were also a response to this need for security; some historians of guilds trace their ancestry back to frith gilds, which were brotherhoods explicitly established for protection and defense.

And so, a society governed by explicit contracts and legal institutions centered on individuals became the norm in Western Europe far before the rest of the world. In the patchwork quilt of post-Roman Europe, some areas escaped infeudation altogether and retained elements of older, more traditional social orders. It was these remote communities that were studied in the late nineteenth century in order to uncover the lost world of Europe’s past tribal organization (for example, in Laveleye’s Primitive Property [9]). In other parts of Europe, feudal contracts took a myriad of alternative forms, as Maine noted above—so much so that medieval historians today dislike even using the term feudalism to describe the political arrangements of the period, because the contracts themselves were so varied. They often note that what we call feudalism was hardly one monolithic system. But it does seem as though the specific arrangements of feudalism from country to country determined the subsequent and divergent paths that various Western European countries would take. In a paper entitled “English Feudalism and the Origins of Capitalism,” political scientist George Comninel argues that the specifics of English feudalism allowed capitalism to develop there, rather than in neighboring France:

The specific historical basis for the development of capitalism in England – and not in France – is ultimately to be traced to the unique structure of English manorial lordship. It is the absence from English lordship of the seigneurie banale – the political form of parcellised sovereignty which was central to the development of Continental feudalism – that can be seen to account for the peculiarly ‘economic’ turn taken in the development of English class relations of surplus extraction. The juridical and economic social relations necessary for capitalism were forged in the crucible of a peculiarly English form of feudal class society.

In France, by contrast, the distinctly political tenor of social development – visible in the rise of the absolutist state, in the intensely political character of the social conflict of the Revolution, and as late as the massively bureaucratic Bonapartist state of the Second Empire – can be traced just as specifically to the centrality of seigneurie banale in the fundamental relations of feudalism.

The effects flowing from this initial basic difference in feudal relations include: the unique differentiation of freehold and customary tenures among English peasants, in contrast to the survival of allodial land alongside censive tenures of France; the unique development of English common law, rooted in the land, in contrast to the Continental revival of Roman law, based on trade; the unique commoner status of English manorial lords, in contrast to the Continental nobility; and, most dramatically, in the unique enclosure movement by which England ceased to be a peasant society – ceased even to have peasants – before the advent of industrial capitalism, in stark contrast with other European societies. [10]

Final Notes

I’ve banged on for too long already, so I’m just going to close with a few notes.

Unfortunately, many of the ideas I’ve written about above have been largely discussed in the context of white supremacy and racialism, and this research will give succor to those who believe that the “white race” is unique and therefore superior to all other people on earth.

I don’t think that’s the intent of the paper at all, although I am a little disturbed by the associations with George Mason University—the epicenter of the Koch Brothers’ takeover of a wide swath of economics. However, I’ll give them the benefit of the doubt for now.

While the racialist and HBD movements online are determined to reduce everything to genes (in a perverse inverse of blank slatism), it seems to me that these are cultural developments more than anything else, and are worth studying.

The desire to have such cultural differences rooted in biology is mainly an attempt by the Reactionary Right to justify the course of history and reify the status quo. For example: why is Africa poor? It’s not because they have been—and continue to be—exploited by Western colonial powers, it’s because they are stupid. The flip side to that is, of course, that Europeans are naturally smarter and more pro-social, and this is baked into the genes, meaning that reform is unnecessary and impossible—it’s just “the way things are.” The Just World philosophy on the level of nations. It also rationalizes why immigration—no matter how limited—is bad, without admitting to pure racism. Rather, it’s “just science,” claim the HBD crowd, that Europeans are different and superior at the genetic level, and therefore must remain “pure and undiluted” in order to maintain Western civilization.

Suddenly, Conservatives Can’t Get Enough of Science (Arc magazine)

But I doubt that there is any genetic basis here. Yes, institutional and cultural beliefs are very persistent, and these are indeed barriers to “Westernizing” the rest of the world. But to put all of this down to genes without evidence—where is the gene for “clannishness”?—is not scientific, it’s political: exactly what they accuse the “radical Left” of engaging in.

Finally, I’ll just note that the places where kinship groups were broken up the earliest seem to have the highest rates of depression, suicide, and mental illness to this day, while those parts of the world that retained embedded human relationships—although significantly poorer—seem to be far happier and more content with life. It forces one to contemplate what the ultimate purpose of “progress” really is.

[1] Jonathan F. Schulz, Why Europe? The Church, Kin-Networks and Institutional Development (PDF), p. 5

[2] van Leeuwen et al., Marriage Choices and Class Boundaries: Social Endogamy in History, p. 9

[3] Jonathan F. Schulz, Why Europe? The Church, Kin-Networks and Institutional Development (PDF), pp. 2-3

[4] Francis Fukuyama, The Origins of Political Order, pp. 236-239

[5] Alan Macfarlane, The Origins of English Individualism: Some Surprises, p. 269

[6] Review of “The Origins of English Individualism: The Family, Property and Social Transition” by A. Macfarlane (PDF)

[7] Alan Macfarlane, The Origins of English Individualism: Some Surprises, pp. 270-271

[8] Quoted in Fukuyama, p. 236

[9] For example, Primitive Property, Chapter XV, p. 212:

Emile Souvestre, in his work on Finisterre, mentions the existence of agrarian communities in Brittany. He says it is not uncommon to find farms there, cultivated by several families associated together. He states that they live peacefully and prosperously, though there is no written agreement to define the shares and rights of associates. According to the account of the Abbé Delalandre, in the small islands of Hœdic and Houat, situated not far from Belle Isle, the inhabitants live in community. The soil is not divided into separate properties. All labour for the general interest, and live on the fruits of their collective industry. The curé is the head of the community; but in the case of important resolutions, he is assisted by a council composed of the twelve most respected of the older inhabitants. This system, if correctly described, presents one of the most archaic forms of primitive community.

[10] George C. Comninel, English Feudalism and the Origins of Capitalism (PDF), pp. 4-5

Religion and the Birth of Liberalism

I want to talk about this article that I found a while back on Cato Unbound called The Trouble in Getting to Denmark. Denmark is the example given by Francis Fukuyama as the ideal modern, peaceful Western Liberal democratic state. Inconveniently for the Cato Institute, it also has one of the most generous social safety nets in the world.

[Tangentially: Cato is all about promoting economic “freedom,” and Denmark is one of the freest and most entrepreneurial societies in the world. But it’s that way precisely because of its strong safety net and social democratic policies—policies that are being promoted by people like Bernie Sanders in the U.S. Also, see this: Never Trust the Cato Institute (Current Affairs)]

This post centers on a new history book by Mark Koyama and Noel Johnson called Persecution and Toleration: The Long Road to Religious Freedom. The authors are both professors at George Mason University and are affiliated with the Mercatus Center, which at first blush might make them a little suspect. But there are some very good historical insights here, which are well worth a look. I’ll also quote extensively from this interview with Koyama by Patrick Wyman on the Tides of History podcast, which covers the subject matter well. I’ve lightly altered some of the dialogue for clarity; quotes are from Koyama unless noted otherwise.

The book’s insights dovetail with what we’ve been talking about recently: the rise of the modern, liberal state. The thesis is that religious freedoms were basically the foundation for the rise of capital-L Liberalism—Liberalism being the idea of society as an assorted collection of solitary, self-directed individuals who must be free from any sort of predetermined social identity. Because this notion has become the hegemonic assumption of the modern world, we fail to recognize just how novel it really is. So let’s dive in…

The main thesis is succinctly stated by Patrick Wyman near the beginning of the podcast:

“The rise of modern states, which were capable of enforcing general rules throughout their territory–down to the local level–were the precondition for religious peace and the eventual rise of religious and other freedoms, which we can term more broadly Liberal freedoms.”

Medieval European society gets the closest look, because it is out of these societies that the modern Liberal state develops, but many of the concepts and insights are applicable to other societies as well.

Religious Freedom versus Religious Tolerance

The book makes a very important point: religious freedom and religious tolerance are not the same thing; they are actually quite different. Most modern nation-states have true religious freedom, and most are founded on a secular basis (to the consternation of religious fundamentalists). Ancient states, however, practiced a form of religious tolerance, which was the toleration of minority religious beliefs, the same way you might tolerate your neighbor’s loud music instead of going over and starting a fight, or tolerate a screaming baby on a flight:

[3:18] Mark Koyama: “We attempt to project backwards our modern notions of what religious freedom is. In our modern language, we often use toleration interchangeably with religious freedom, where we describe toleration as an attitudinal thing–like ‘I’m a tolerant person; I don’t care what religion you have,’ as opposed to its original meaning, which was ‘to bear.’ This was a sufferance. We’re going to allow these Muslims, say, to practice their religion, but it’s not because we’re okay with it. It’s because it’s the best expedient or pragmatic response to religious diversity.”

[10:51] Patrick Wyman (host): “There’s a fundamental difference between religious sufferance and freedom. Between suffering something to happen because it’s necessary for you to run your state the way you want to, and actively embracing this thing as a legally-based ideal.”

I think that’s an important point. Ancient multi-ethnic states did not have true religious freedom, despite what you will often find asserted in various history books. They had religious tolerance; that is, they permitted subcommunities to openly practice their religion. It was a sufferance, but they allowed it because it was better than the alternative.

This was a categorically different concept from religious freedom as we think about it today.

One example is the Roman Empire. All the Romans really wanted was to gain the spoils of their vast empire via tax collection and tribute. They often co-opted local rulers and other notables, who subsequently became “Romanized,” but they weren’t out to transform society. To that end, subjugated ethnic groups were allowed to maintain their cultural and religious practices, with a few stipulations. For most religions, this wasn’t a problem—they were flexible enough that they could accommodate some Roman gods in their practices and be more-or-less okay with it. The Jews, on the other hand, with their strident and uncompromising monotheism, were different. They regarded their God as the real one, and all others as idols, and worshiping idols was strictly forbidden. This is why there was so much tension in Judea, tension that ultimately led to several revolts and wars.

This was a time when religious identity was not separate from cultural or ethnic identity. The rise of doctrinal, evangelizing religions changed all that. You can be an Arab, a Turk, a Persian, or a Balinese and also be a Muslim. You can be Irish, Polish, French, Italian, or Nigerian and be a Catholic. That’s a much more modern-day conception of religion—as a creed freely chosen. But in ancient societies, religion was an essential and inseparable part of shared cultural identity.

In our reading of the historical evidence, neither ancient Rome nor the Islamic or Mongol Empires had religious freedom. They often refrained from actively persecuting religious minorities, but they were also ruthless in suppressing dissent when it suited their political goals. Religious freedom is a uniquely liberal achievement, and liberalism is an achievement of post-1700 modernity. What explains it?

Which raises the second major point of the book.

Identity Rules versus General Rules

For me, the biggest takeaway was the difference between identity rules and general rules.

[6:25] “An identity rule is where the content or enforcement of the law depends on the social identity of the individual involved. In contrast, a general rule is a rule where the content or enforcement of the law is independent of that individual’s relevant social identity…The identity rules could privilege a minority, or they could disadvantage them. The key here is that your social identity is determinative.”

They actually distinguish three different types of rules: personal rules, identity rules, and general rules. Personal rules are targeted at the specific person who committed the infraction, and are largely ad hoc. This works well on the local level, where everybody knows everybody else, such as in a small self-governing village, but it doesn’t scale up.

When large empires came on the scene, they imposed identity rules, where law enforcement was based largely on one’s group identity. The reason they did this is that ancient states had limited capacity to govern at the local level, i.e., low state capacity. The sophisticated legal systems we have today—with their courts, police, bailiffs, jails, attorneys, and professional judges—simply didn’t exist. Plus, the very notion of an individual as having an identity wholly separate and unmoored from the larger group to which he or she belonged was much less common in the ancient world than in our modern one. That is, ancient societies were collectivist by default. And so, rules were based on one’s ascribed group identity: one’s clan affiliation, social status, guild, corporate group, religion, etc.

With the shift to settled agriculture after 8,000 BC, political organizations became larger and states oversaw the introduction of more sophisticated legal systems to prevent theft, fraud, and uncontrolled violence. For most of history, and in much of the developing world today, these laws have taken the form of identity rules.

Identity rules depend on the social identity of the parties involved. This could refer to an individual’s clan, caste, class, religious affiliation, or ethnicity. Examples from historical legal systems abound. Aristocrats faced different rules from commoners. Slaves faced different rules from freemen. The Code of Hammurabi, for example, prescribed punishment based on the relative status of the perpetrator and the victim. Identity rules were common historically because governing individuals on the basis of their legible social characteristics was cheap. As religious identity was particularly salient, many identity rules treated individuals differently on the basis of their religion.

The Trouble in Getting to Denmark (Cato Unbound)
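Since the distinction is essentially mechanical, a toy sketch may make it concrete. Under an identity rule, the penalty is looked up by the social identities of the parties (as in the Hammurabi example above); under a general rule, the parties’ identities are ignored. All statuses and penalties below are invented for illustration:

```python
# Toy contrast between identity rules and general rules. All statuses
# and penalties are invented for illustration.

# Identity rule: the penalty depends on who the perpetrator and the
# victim are (cf. the Code of Hammurabi example above).
IDENTITY_RULE = {
    ("aristocrat", "commoner"):   "fine",
    ("commoner",   "aristocrat"): "mutilation",
    ("commoner",   "commoner"):   "restitution",
    ("aristocrat", "aristocrat"): "restitution",
}

def punish_identity(perp: str, victim: str) -> str:
    # Enforcement only needs the parties' legible group membership.
    return IDENTITY_RULE[(perp, victim)]

def punish_general(perp: str, victim: str) -> str:
    # General rule: the parties' identities are deliberately ignored.
    return "restitution"

print(punish_identity("commoner", "aristocrat"))  # mutilation
print(punish_general("commoner", "aristocrat"))   # restitution
```

The contrast also shows why identity rules were “cheap”: the lookup table only needs each party’s visible group membership, which a low-capacity state can observe easily, whereas enforcing the general rule uniformly presupposes courts and police that can reach every individual.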

This is something I’ve repeatedly tried to emphasize in my writing: when we talk about “states,” or things like “the rise of the state” in ancient history, we’re talking about something qualitatively different than when we use the term “state” today. That’s important to keep in mind.

[9:24] “The nature of pre-modern states is that, because of the way they govern, they have to rely on identity rules. They don’t have the ability or the capacity to govern at a very local level. They can’t extend their reach deep into society. So they’re more likely to say to this community: ‘we’re going to delegate to you a lot of authority; a lot of power.’ Even if they wanted to enforce a general rule, they wouldn’t be able to.”

“To take another pre-modern example, if you look at the Ottoman state throughout its history, it’s seen as an absolutist state where the Sultan has all the power. But it’s such a vast empire that, given how primitive communication technologies are, it’s inevitably decentralized, and power is delegated to local nobles. And that means that religious minorities like Christians and Jews get quite a lot of autonomy; a lot of independence, because the state just can’t govern them directly.”

“So the local religious leaders will get quite a lot of autonomy, and a lot of ‘freedom,’ precisely because the state governs through identity rules, not through general rules. This results in a lot of self-governance for religious minorities. But the key point is that that religious self-governance should not be mistaken for religious freedom. Nor should a state like the Ottoman state, which delegates power and gives autonomy to religious communities, be mistaken for a liberal state or for an example of religious freedom or liberalism.”

Religious Legitimization

The rulers of ancient states relied primarily on religion to legitimize their rule. This seems to stem back very far, indeed. A careful reading of, for example, The Creation of Inequality by Flannery and Marcus, leads to the conclusion that all of the earliest ruling classes everywhere claimed some sort of special connection to the divine entities that were the object of collective reverence. Sometimes this was the “King as a god” model of ancient Egypt. Sometimes this was the “Ruler as steward” model, as in ancient Mesopotamia. Sometimes it was “sacral kingship,” with the ruler as high priest. Sometimes it was tribal elders or scribes who “interpreted God’s will”. Much later, it was the “Divine Right of Kings.” But religion seems to have played a role in virtually all cases that we know of.

If identity rules were a “cheap” form of enumerating and enforcing laws in low-tech, multi-ethnic societies, then appealing to religion was a “cheap” way for rulers to claim legitimacy in these types of societies. It was also crucial to the creation of coherent group identities, which were necessary for identity rules to function. Often it involved special treatment for clergymen, or some sort of power-sharing accommodation with religious officials. But that also led to fairly weak states, with little power to expand the rulers’ prerogatives.

Religion was so central to premodern societies that it is difficult to fully understand the transformations associated with modernity without attending to it. Religion was used to justify the categories which government and society more broadly used to structure everyday life. Women versus men, nobles versus commoners, guild members versus non-guild members, Muslims versus Christians, Christians versus Jews. All of these categories—as well as the different statuses associated with them in law and in culture—relied to a varying degree on religion to legitimize their use.

Religion was an especially important component of identity in the large agrarian civilizations of Europe and the Near East in a time before nationalism and nation states. Shared religious beliefs and religious identities were seen as crucial to maintaining social order. Religious differences were extremely destabilizing because they were associated with a host of deep societal cleavages.

In an environment where a common religious identity undergirded not only the institutions of the church, but also those of the state and civil society, both religious freedom specifically, and liberalism more generally, were unthinkable.

For instance, in medieval and early modern Europe oaths sworn before God played an important role in upholding the social order. These were thought so important that atheists were seen as outside the political community, since as John Locke put it, “promises, covenants, and oaths, which are the bonds of human society, can have no hold upon an atheist.”

A shared religious identity was also crucial for guild membership. Guilds in Christian Spain excluded Muslims. Guilds in 14th century Tallinn excluded Orthodox Christians. Jews were excluded almost everywhere. In parts of Europe converts from Judaism and even their descendants or remote relations could not be guild members. In a world governed by identity rules, an individual’s religious identity determined what economic activities were open to them.

The Trouble in Getting to Denmark (Cato Unbound)

Identity rules were even relied upon by rulers to raise revenue. For example, in many ancient empires, taxes were collected at the village level, with the collection delegated to local elders. Taxes might be assessed differently depending on the group in question. Merchants might be taxed differently than farmers, for example, and oftentimes nobles weren’t taxed at all! Different ethnic groups might face different levels of responsibility and taxation. Jews, to take one case, were the only group allowed to lend money at interest in Catholic Europe, so they were frequently used as cash cows by Christian rulers:

As an illustration, consider how early modern governments often used Jewish communities as a source of tax revenue. Usury restrictions made lending by Christians very costly. However, rulers could grant monopoly rights to Jews to lend without violating their religious principles. In turn, the rates of interest charged by Jewish lenders were high, and the profits were taxed away by the very rulers who granted these rights. Finally, the specialization of Jews as moneylenders exacerbated preexisting antisemitism among the Christian population. This in turn made it relatively easy for rulers to threaten Jews in case they didn’t intend to pay up.

So long as rulers relied on Jewish moneylending as a source of revenue, Jews were trapped in this vulnerable situation. Their position could improve only when states developed more sophisticated systems of taxation and credit.

As suggested by the above example, low state capacity and a reliance on identity rules are self-reinforcing. States that rely on identity rules face less incentive to invest in the fiscal and legal institutions that would increase state capacity. This, in turn, makes them more reliant on identity rules and less able to enforce general rules.

Social Equilibrium

Low state capacity, identity rules and religious legitimization all combined and interacted with each other to form a self-reinforcing social equilibrium, argue Koyama and Johnson.

What is a self-reinforcing equilibrium? This is a tricky one. It’s a concept developed by the Stanford economist Avner Greif. He distinguishes between “institutions as rules” and “institutions as equilibria.” The following is my interpretation, as best I can make it out:

Institutions as rules is just what it says—it looks at what the rules of the game are, and how they developed over time. Rules are prescriptive, and are set and enforced from above. They change very slowly.

Institutions as equilibria is a concept developed from game theory. In this conception, rules are an emergent phenomenon arising from consistent, repeated interactions between groups of people. There is no overall enforcer; rather, the rules develop through “playing the game” over and over again. Consequently, such rules are more likely to develop out of repeated voluntary interactions between groups rather than individuals, and are enforced by intra-group norms rather than an all-powerful “referee” overseeing everything. The rules of the game are not static; they develop as time goes on. This approach emphasizes the incentives and motivations of the groups which are interacting.

In the institutions-as-rules approach, rules are institutions and institutions are rules. Rules prescribe behavior. In the institutions-as-equilibria approach, the role of “rules”, like that of other social constructs, is to coordinate behavior. The core idea in the institutions-as-equilibria approach is that it is ultimately the behavior and the expected behavior of others rather than prescriptive rules of behavior that induce people to behave (or not to behave) in a particular way. The aggregated expected behavior of all the individuals in society, which is beyond any one individual’s control, constitutes and creates a structure that influences each individual’s behavior. A social situation is ‘institutionalized’ when this structure motivates each individual to follow a regularity of behavior in that social situation and to act in a manner contributing to the perpetuation of that structure.

Institutions: Rules or Equilibria? (PDF)

An example he gives is the merchant communities that traded at the medieval Champagne Fairs:

For example, at the medieval Champagne Fairs, large numbers of merchants from all over Europe congregated to trade. Merchants from different localities entered into contracts, including contracts for future delivery, that required enforcement over time. There was no state to enforce these contracts, and the large number of merchants as well as their geographic dispersion made an informal reputation mechanism infeasible…impersonal exchange was supported by a “community responsibility system”. Traders were not atomized individuals, but belonged to pre-existing communities with distinct identities and strong internal governance mechanisms.

Although particular traders from each community may have dealt with merchants from another community only infrequently, each community contained many merchants, so there was an ongoing trading relationship between the communities, taken as a whole. Merchants from different communities were able to trust each other, even in one-shot transactions, by leveraging the inter-community “trust” which sustained these interactions. If a member of one community cheated someone from another community, the community as a whole was punished for the transgression, and the community could then use its own internal enforcement institutions to punish the individual who had cheated.

This system was self-enforcing. Traders had an incentive to learn about the community identities of their trading partners, and to establish their own identities so that they could be trusted. The communities had an incentive to protect the rights of foreign traders, and to punish their members for cheating outsiders, so as to safeguard the valuable inter-community trade. Communities also developed formal institutions to supplement the informal reputation mechanism and coordinate expectations. For example, each community established organizations that enabled members of other communities to verify the identity of its members.

Ultimately, the growth of trade that this institution enabled created the impetus for its eventual replacement by more formal public-order (state-based) institutions which could directly punish traders by, for example, jailing them or seizing their property.

Thus, we see the importance of group identity and solidarity in establishing and enforcing social norms in a world where centralized institutions (e.g. states) are very weak. Without a powerful state, there is simply no way to enforce norms among a group of isolated, atomized individuals whose identity is completely self-chosen. But membership in various sodalities makes it possible. If you were a bad merchant who cheated or reneged on your debts, you wouldn’t be a merchant very long, even without an all-powerful state enforcing contracts from above. Your reputation, and your relationship with the group, was paramount.
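To see why Greif calls this arrangement self-enforcing, it can help to write the incentives down. Here is a minimal sketch, with all payoffs invented for illustration: cheating an outsider yields a one-time windfall, but it triggers a boycott of the cheater’s whole community, which then expels him to restore trade, so honesty dominates over repeated fairs:

```python
# Minimal sketch of the community responsibility system described
# above. All payoffs and parameters are invented for illustration.

GAIN_FROM_TRADE = 1.0     # payoff from each honest cross-community deal
GAIN_FROM_CHEATING = 3.0  # one-off windfall from cheating an outsider
ROUNDS = 50               # number of fairs in a merchant's career

def lifetime_payoff(cheats: bool) -> float:
    """Total payoff to one merchant across repeated fairs.

    A cheater's community is boycotted, and the community expels him
    to restore trade -- so he earns nothing after the round in which
    he cheated.
    """
    total = 0.0
    for fair in range(ROUNDS):
        if cheats and fair == 0:
            total += GAIN_FROM_CHEATING
            break  # boycott, then expulsion: no further trade
        total += GAIN_FROM_TRADE
    return total

print("honest merchant: ", lifetime_payoff(cheats=False))  # 50.0
print("cheating merchant:", lifetime_payoff(cheats=True))  #  3.0
```

Nothing in the sketch requires a state: the punishment is carried out by the counterparty community (the boycott) and by the cheater’s own community (the expulsion), which are exactly the two mechanisms the quoted passage describes.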

The authors also make a distinction between equilibria which are stable and equilibria which are self-undermining.

[11:06] PW: “You talk a lot about political legitimacy, about what allows rulers to rule without the constant threat of political violence, of coercive violence. And so you get at the concept of self-reinforcing equilibrium—that this is how medieval society functioned. In your conception, you have religious legitimacy—legitimacy given to a ruler by religious authorities—and identity rules, working together to generate a kind of political equilibrium.”

MK: “In the Middle Ages we see widespread reliance on identity rules. Why? Well, one reason is that even if a ruler was ambitious and had read Roman law and envisioned ruling on the basis of laws which were more general, less parochial, and less local, they wouldn’t have the ability to really enforce them. Ambitious medieval rulers lacked bureaucracies and standing armies, so they would be unable to overturn these rules and replace them with more general rules. So that’s one self-reinforcing relationship—the relationship between low state capacity and reliance on identity rules.”

“The other aspect is the reliance on religion as a source of legitimacy. One reason why religion is valuable is because medieval rulers didn’t provide much in the way of public goods, beyond maybe defense; but even defense is questionable because often defense is actually offense. So they’re not providing education, they’re not providing welfare—that’s done by the Church. They’re not really regulating markets. They’re not doing much to alleviate famine or harvest failures. Where does their legitimacy come from, then?”

“It’s because they’re the ‘Most Christian King,’ or the ‘Catholic Monarch,’ or the ‘Defender of the Faith.’ Religion is a cheap way for rulers to get legitimacy. But if you’re using religion to get legitimacy, you’re making a deal with the religious authorities.”

“So in the case of medieval Europe, you’re making a deal with the Church. What the deal entails might be things like: making Churchmen exempt from certain laws, or exempt from paying taxes, which was common in the medieval period. It might involve allowing the papacy to choose bishops, or giving churchmen political offices.”

“If you have low state capacity, religious legitimization is going to be an appealing strategy. But at the same time, the more you rely on religion or religious authorities to legitimate your rule, the more that’s going to curtail your power, your discretionary authority to build state capacity. So it’s a self-reinforcing relationship.”

And so low state capacity, religious legitimization, and the application of identity rules were all linked together in maintaining a stable equilibrium. Eventually, though, that equilibrium was disrupted.

Disrupting the equilibrium: The Reformation and the printing press

The Gutenberg printing press, expanding literacy, and the Protestant Reformation were all intimately connected, and provide a potent example of how technological change often drives social change, for better or for worse (a point worth attending to today).

Suddenly you have many more religious minorities, disrupting the old stable equilibrium. Perhaps even more significantly, you have religious minorities that are allied across national boundaries. This is something that did not really exist before.

[23:00] “John Calvin and Martin Luther didn’t want to secularize society or the state—anything but. They wanted to revitalize religion on different foundations. But the net result was something very different than what they intended…”

“Large chunks of society that were once the concern of the Church are no longer the concern of the Church, at least in the Protestant territories. For example, in England the monasteries are sold off, and a lot of Church land is privatized, so a lot of functions that the Church was doing—like providing welfare to the poor—are no longer being provided in sixteenth-century England. That generates a crisis of beggars and paupers in Elizabethan England which the state eventually has to solve with the introduction of the Poor Law in the early seventeenth century.”

“In the German territories, it’s been shown by research that Protestantism leads to the selling off of Church buildings. Even in Catholic Europe, the Counter-Reformation is tightly controlled by powerful monarchies in Spain and France. And so the independent ability of the Church is weakened as a result. Similarly, the ability of identity rules and religious identity to effectively govern society is weakened where you have multiple religions in one society.

“So all of these societies which experience the Reformation wholeheartedly—France, the German territories, England—they generate religious minorities that they didn’t have before.”

“This is an ongoing problem. In England, the wars of religion destabilize the political economy for the entire period between Henry the Eighth and the Glorious Revolution. You’re always worrying whether the Catholics will somehow take control, or will turn England toward Rome. That generates the persecution of Catholics, and it generates conflict between Parliament and the King.”

“Germany is the most extreme example, because the Holy Roman Empire descends into a terrible war—the Thirty Years’ War—which is one of the worst wars in European history.”

“Throughout this period of crisis, which lasts more than a century, European rulers want to return their societies to how they had been in the medieval period. They want to regain religious homogeneity, so they think they can reconcile the Protestants and the Catholics. It’s a common view in sixteenth-century France that if the king can bring everyone together, there will be a way to bring the Protestants back into the fold. We also have the policy of expulsion which is used not only in Spain and Portugal, but also in France at the end of the seventeenth century. You feel you can’t govern effectively so long as you have a group of people who belong to another religion, so you expel them.”

“Because rulers are conditioned on this prior equilibrium, they don’t know how to deal with religious differences. And it takes basically a century-and-a-half of conflict, violence, and then accommodation before there’s a movement to reorient these societies along different rules. There’s what we would recognize as a shift in political arrangements, one which de-emphasizes religion as a source of political legitimacy and shifts away from this reliance on identity rules towards more general rules. And, of course, this transition takes several centuries.”

They then discuss a concept called multivocal signaling. In an era of low information flow and primitive communication technologies, rulers could target alternative messages to different groups of subjects. Each message was tailored to that particular social group, and was designed to appease them and keep them in the fold. The rulers’ identity became a Rorschach ink blot designed to be interpreted many different ways by many different groups of people.

But once information became easier to disseminate and access, different groups could compare notes. Now it was no longer possible to be all things to all people—rather like when a cheating man’s wife discovers his secret other families. The concept comes from The Struggle for Power in Early Modern Europe by political scientist Daniel H. Nexon:

[27:15] PW: “In the early modern period, especially with the rise of print and then the Reformation that follows, it gets a lot harder for rulers to be everything to all of their different groups of subjects–what Nexon terms multivocal signalling. Premodern rulers had done a lot of being one thing to one group of people in their kingdom, another thing to another group of people. So you could simultaneously be ‘Protector of the Jews’ and ‘Most Christian King,’ and this to the artisans, and this to the nobles. A ruler could be a lot of different things simultaneously because it was easy to target messages to those groups in the absence of mass media of communication.”

“But when you get the rise of print and simultaneously the splintering of society along religious lines, it gets a lot harder to be everything to everybody, because everybody knows what you’re saying to everybody else, too. So it becomes much harder for rulers to maintain these split identities that allow them to govern heterogeneous societies effectively by means of these identity rules.”

“Maybe that’s a thing that helps explain the shift to general rules. When you can’t be everything to everybody, you need to find different bases of legitimacy and power on which to rule.”

[28:33] MK: “…When we think about why religious persecution was so acute during that period—why do you have these wars of religion—the kind of trite, high-school-history view is that people back then were simply intolerant. Then we can look down on them from our modern liberal societies and say that people in the sixteenth century really believed in burning heretics alive, or killing people for religious differences.

“But Daniel Nexon’s book really points out that because of the spread of print media, this religious crisis was really a geopolitical crisis, because Catholics in France and Spain were now interested in the fate of Catholics in England. So the Catholics in England then become a potential fifth column in the geopolitical struggle taking place for non-religious political reasons between England, France, and Spain. They’re aligned with the political interests of a foreign power. Ditto Protestants in France: they’re going to be aligned with the Dutch Republic, or with the German states, or with England. So, again, a potential fifth column that the state no longer can trust.

“Prior to the Reformation, there were religious differences across these European states. People would have their own local version of Catholicism. They would worship local saints and have local practices. But those local religious differences were not correlated in any way with political differences at the geopolitical level. The fact that you might have your own religious practices in Norfolk was not going to align you with the French. But by the seventeenth century, that alignment is exactly what holds for Catholic and Protestant minorities in their respective countries. So that’s another layer of this crisis that early modern rulers faced.”

Nexon himself describes multivocal signaling this way:

Multivocal signaling enables central authorities to engage in divide-and-rule tactics without permanently alienating other political sites and thus eroding the continued viability of such strategies. To the “extent that local social relations and the demands of standardizing authorities contradict each other, polyvalent [or multivocal] performance becomes a valuable means of mediating between them” since actions can be “coded differently within the audiences.” Multivocal signaling, therefore, can allow central rulers to derive the divide-and-rule benefits of star-shaped political systems while avoiding the costs stemming from endemic cross pressures… The spread of reformation, in particular, made it difficult for dynasts to engage in polyvalent signaling across religiously differentiated audiences…

The Struggle for Power in Early Modern Europe: Religious Conflict, Dynastic Empires, and International Change, by Daniel H. Nexon, pp. 114-115

This also helps explain the emergence of nationalism and national identities in nineteenth-century Europe, and the demise of multi-ethnic states like the Austro-Hungarian Empire. As the hand of the state reached ever deeper into the underlying fabric of society during this period, people wanted to be ruled directly by people “like them” and not by “outsiders.” Ancient states, by contrast, did fairly little besides collecting taxes, guaranteeing safe travel, and keeping basic order, with underlying ethnic identities remaining mostly intact.

The Roman Empire, again, provides an example. You can’t look at a map of the Roman Empire at its height without pondering, “How could they govern such a vast territory without any modern technology?” The answer is: they didn’t! The empire was a sort of “stratum” above local communities whose day-to-day lives probably differed very little from those of their remote ancestors. The empire provided an organizational framework, and little else. Even a standing army could only move as fast as a soldier could march, and communicate as fast as a horseman could ride. Rulers moved the army about strategically, like pieces on a chessboard, in order to maintain order and quell revolt. Actual interaction with government officials, however, was limited to a small coterie of aristocratic local leaders. For most ordinary people in the ancient world, the “empire” they were nominally ruled by was just a remote abstraction. With the rise of strong, centralized states, that was no longer the case. Even today separatist conflicts abound, as in Catalonia or Kurdistan.

The emergence of general rules and modern Liberalism

And so we finally come to the introduction of general rules—rules written to treat everybody equally, regardless of group identity, doctrinal creed, or any other ascribed social status. Whether you were Protestant, Catholic or Jew (or even atheist!), the law was the same. Of course, this was an ideal often not lived up to, but it started to become the common expectation. It came about only after every other approach had been tried by Early Modern rulers and failed. It’s hard to win a war against a belief system. But this approach also freed Early Modern rulers to expand state capacity in ways they could not have done before, since appeasing religious officials was no longer paramount. For example, Napoleon considered his law code to be his finest and most durable achievement, surpassing even his military victories. All sorts of archaic and feudal rules were swept away.

Yet there were often many attempts from below to push back against this kind of governance, and hence there were significant roadblocks on the way to more modern systems of professional, bureaucratic governance, democracy, and the expansion of state capacity:

[31:15] “We see endless attempts by Early Modern rulers to build state capacity, and they’re always being undermined at the local level…Every attempt by these Early Modern rulers to build state capacity is one step forwards, two steps backwards. There are these forces pushing back against any attempt to build a society based on general rules—what Francis Fukuyama calls the repatrimonialization of the state—and often it’s only in war that these modern states are forged. War is driving this increase in state capacity, but war is also destroying the economy and using up the lives of hundreds of thousands of individuals. That’s why it’s such an arduous process.

“Some of these Early Modern rulers are heading towards more general rules and increased state capacity; others think the way forwards is actually backwards. The term historians use is confessionalization, and in some sense these confessional states that are built in the Early Modern period are trying to rebuild the medieval equilibrium. I think Louis the Fourteenth, what he’s doing when he expels the Huguenots—the French Protestants—is looking back to the golden age of how France was before the Reformation. He thinks if only he could get back and reunify the country religiously, that would actually strengthen his power and make the state stronger.

“We know after the event that that’s a failure. It doesn’t strengthen the French economy or society, because they lose a very productive minority. But it also doesn’t work even on its own terms, because by the eighteenth century there are still many, many Protestants in France. It doesn’t get rid of the problem of a religious minority.”

European rulers eventually had no choice but to acquiesce to freedom of religion as we now know it. Edicts of Toleration were signed all over Europe. The Founding Fathers of the United States—for whom the wars of religion were still recent history—recognized this and enshrined it in the Constitution. Its birth was much more painful in Europe, beginning with the often radical atheism of the leaders of the French Revolution. This kicked off the long nineteenth century—the period of conflict in which modern Liberalism was born.

With religious affiliation now being something “freely chosen” according to one’s own individual conscience, other forms of ascribed identity soon fell by the wayside. Free cities and communes had always been places for nonconformists in Medieval Europe to flee to in order to escape the stultifying conformity of the countryside and shed their traditional social obligations. These sophisticated, cosmopolitan urbanites—the bourgeoisie—became the nucleus of the new social order based around “freely chosen” social affiliations, flexible and ever-shifting personal identities, and explicit (as opposed to implicit) contractual obligations:

In our argument it was not that the Wars of Religion simply exhausted confessional and doctrinal disputes. Rather there was a transformation at the institutional level. The leading European states shifted away from identity rules towards more general rules. This shift was related to 19th-century historian Henry Sumner Maine’s discussion of the passage from status to contract: Status was imposed and ascriptive. Contracts, in contrast, are the outcome of voluntary choices. Status-based rules are invariably identity rules. Contracts provide the foundation for a system of general rules.

Moving from a fixed status to a contractual society helped set in motion a range of developments, including the growth of markets and a more extensive division of labor. But it had the unintended consequence of diminishing the political importance of religion, and this made liberalism feasible for the first time in history.

The Trouble in Getting to Denmark (Cato Unbound)

Wars played a major role in the emergence of modern states, particularly the need to raise ever-larger amounts of money to fund them. In our history of money, we saw how international merchants’ use of paper instruments of credit, such as bills of exchange, existed alongside the ruler’s legal authority to raise taxes and coin money. Bills of exchange and trade credit allowed these merchants to coordinate their activities across international boundaries. This was enforced not by the state, but by private networks of merchant-bankers (i.e. via rules of equilibrium). When the bankers’ ability to issue paper credit became conjoined with the state’s ability to levy taxes, with the establishment of the Bank of England, you had a major step toward the creation of the modern welfare-warfare state. The end of the Thirty Years’ War in the Peace of Westphalia led to the concept of what political historians refer to as Westphalian sovereignty—the basis of the sovereign, absolutist nation-state. These developments, in turn, led to the establishment of a professional Weberian civil service, supplanting the patrimonial states governed by hereditary aristocrats, i.e. “depatrimonialization.” Per Wikipedia:

[Max] Weber listed several preconditions for the emergence of bureaucracy, including an increase in the amount of space and population being administered, an increase in the complexity of the administrative tasks being carried out, and the existence of a monetary economy requiring a more efficient administrative system. Development of communication and transportation technologies makes more efficient administration possible, and democratization and rationalization of culture results in demands for equal treatment.

As Karl Polanyi extensively documented, strong states capable of enforcing general rules and contracts, together with haute finance, were the key requirements in the creation of Market Society. Market Society—where everything, including land and labor, was for sale and theoretically allocated according to the impersonal forces of supply and demand—was not merely an expansion of the kind of activities that had gone on in generations prior. Rather, it was something altogether new and radically different, and it was done with the full blessing of the elite ruling classes. Patrick Deneen notes the connection in his book, Why Liberalism Failed:

Individualism and statism advance together, always mutually supportive, and always at the expense of lived and vital relations that stand in contrast to both the starkness of the autonomous individual and the abstraction of our membership in the state. In distinct but related ways, the right and left cooperate in the expansion of both statism and individualism, although from different perspectives, using different means, and claiming different agendas. This deeper cooperation helps to explain how it has happened that contemporary liberal states—whether in Europe or America—have become simultaneously more statist, with ever more powers and authority vested in central authority, and more individualistic, with people becoming less associated and involved with such mediating institutions as voluntary associations, political parties, churches, communities, and even family. For both “liberals” and “conservatives,” the state becomes the main driver of individualism, while individualism becomes the main source of expanding power and authority of the state. p. 46

Our main political choices come down to which depersonalized mechanism will purportedly advance our freedom and security—the space of the market, which collects our billions upon billions of choices to provide for our wants and needs without demanding from us any specific thought or intention about the wants and needs of others, or the liberal state, which establishes depersonalized procedures and mechanisms for the wants and needs of others that remain insufficiently addressed by the market.

Thus the insistent demand that we choose between protection of individual liberty and expansion of state activity masks the true relation between the state and market: that they grow constantly and necessarily together. Statism enables individualism, individualism demands statism. For all the claims about electoral transformations—for “Hope and Change,” or “Making America Great Again”—two facts are naggingly apparent: modern liberalism proceeds by making us both more individualist and more statist. This is not because one party advances individualism without cutting back on statism while the other does the opposite; rather, both move simultaneously in tune with our deepest philosophic premises. p. 17

The authors display their Libertarian biases toward the end of the article with this line: “While the far left has never accepted liberal values such as freedom of expression and freedom of religion, antipathy towards liberal values is now evident in mainstream progressive publications as well. Liberalism is indicted because it is perceived as legitimating inequality and failing to endorse social justice.” Notice the lack of citations here.

A nice strawman, but liberalism is not indicted; capitalism is. Capitalism is inherently undemocratic, since it invests disproportionate power in an unelected minority capitalist class, whose power stems from paper ownership claims (in deeds, stocks, bonds, and accounts) which can be passed down in perpetuity. As Deneen notes, in practice this simply replaces one aristocracy with another. And we all know that the rich can buy special treatment under the law due to their disproportionate wealth and influence in comparison with the rest of us, something which makes a mockery of so-called “liberal values.”

Also, under Neoliberalism, repatrimonialization and rent-seeking have exploded. Monopolies and oligopolies control practically every major industry. The feckless rich are bailed out while ordinary citizens are left to their own devices. Prices have less to do with actual production costs than with sheer market power, and rules are written and re-written by the industries themselves in order to privilege existing actors and keep out competitors (including governments themselves). Parasitic financial gambling has become the highest-return activity, rather than providing useful goods and services. Incompetent cronies and family members take over key positions in the public and private sector. The upper class uses elite universities as a moat to maintain their elevated status, despite their demonstrated lack of judgment or competence.

Capitalism as it currently stands also commonly makes rules that favor certain groups over others. Professional classes like doctors, lawyers, engineers, and so forth, are shielded from international competition by government restrictions. Patent and copyright laws enforced by strong states prevent the copying of innovations by others, and preserve existing wealth distribution. Wealth is taxed more lightly than wages. Meanwhile, most average workers are left to “sink or swim” in a harsh, competitive globalized job market with no protections whatsoever. This is all rationalized as an “inevitable” force of nature. Dean Baker has written a whole book about it called Rigged:

Rigged: How globalization and the rules of the modern economy were structured to make the rich richer (Real World Economics Review)

In the end, the authors conclude, “[W]e think the core characteristics of a liberal society are the rule of law and reliance on general rules,” and, “Liberalism is valuable because it is the only form of social order we know of that is consistent with a high degree of autonomy and human dignity.”

Well, under that definition, socialism would fit the bill just as well, if not better. It’s hard to see a lot of “dignity” and “autonomy” in the number of people struggling in modern-day America. It’s hard to equate the millions of prisoners toiling away for pennies an hour with “dignity.” And it’s hard to have “autonomy” when the base condition of existence for most of us is having to constantly sell our labor or face utter ruin. Liberalism is—or should be—more than simply allowing the rich the “freedom” to make whatever rules they wish for their own benefit, to the detriment of society as a whole. If that doesn’t happen soon, don’t expect Liberalism to last much longer.

Don’t Think Like an Economist

Here’s Tyler Cowen over at Marginal Revolution:

Larry Summers is my favorite liberal economist because even while maintaining his liberal values he never stops thinking like an economist. That makes him suspect among the left but it means that he is always worth listening to….

Summers on the Wealth Tax (Marginal Revolution)

No, that’s precisely what makes him NOT worth listening to (he’s—surprise, surprise!—opposed to the tax). Listening to arrogant Ivy League hyper-elite technocrats like Larry Summers is exactly why the Democratic Party is in the pathetic state it is in, and continually loses elections, even to incompetent morons like Donald Trump. If Larry Summers is representative of “liberal values,” then God help us all.

Summers was Obama’s economic advisor, the same Obama who refused to jail a single banker or financier for their role in the housing collapse, no matter how blatant their malfeasance. But the most telling example of how Larry Summers “never stops thinking like an economist” (a good thing in Cowen’s estimation) is the infamous Summers memorandum, in which he argued that—according to economic logic—Africa was tragically underpolluted, and that this needed to be rectified.

Summers, an enthusiast for the [World] Bank’s policy of encouraging poor countries to open their borders to trade, went on to explain why he thought that it was legitimate to encourage polluting industries to move to poor countries. ‘The measurement of the cost of health-impairing pollution depends on the forgone earnings from increased morbidity and mortality,’ he wrote. So dangerous pollution should be concentrated ‘in the country with the lowest wages’.

He added: ‘I think the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.’

He also introduced the novel notion of the ‘under-polluted’ country. These included the ‘underpopulated countries in Africa’ where ‘their air quality is probably vastly inefficiently low compared to Los Angeles’. His point was that since clean air, which he calls ‘pretty air’, is valuable as a place to dump air pollution, it is a pity poor countries can’t sell their clean air for this purpose. If it were physically possible there would be a large ‘welfare-enhancing trade in air pollution. . .’ he says.

Summers admits in his much-faxed memo that there might be objections to his case, on moral grounds for instance. But he concludes by saying that ‘the problem with these arguments’ is that they ‘could be turned around and used more or less effectively against every Bank proposal for liberalisation’.

‘What he is saying,’ comments British environmentalist Nicholas Hildyard, ‘is that this argument represents the logical conclusion of encouraging free trade round the world.’

Why it’s cheaper to poison the poor (New Scientist)

He never stops thinking like an economist!!! Um, yay?

The sociopathic logic above is the “logical” outcome of doing a cost-benefit analysis involving “tradeoffs”—the stock in trade of economics as a governing philosophy, which we’ll look at more closely in a bit.

Here are some more of Larry Summers’s greatest hits:

Fresh off his success in Lithuania, Summers moved to the World Bank, where he was named the chief economist in 1991, the year he issued his famous let’s-pollute-Africa memo. It was also the year that Summers, and his Harvard protégé Andrei Shleifer (who worked with Summers on the Lithuania economic transformation), began their catastrophic “rescue” of Russia’s crisis-ridden economy. It’s a complicated story involving corruption, cronyism and economic devastation. But by the end of the 1990s, Russia’s GDP had collapsed by more than 60 percent, its population was suffering the worst death-to-birth ratio of any industrialized nation in the twentieth century, and the financial markets that Summers and Shleifer helped create had collapsed in what was then the world’s biggest debt default ever. The result was the rise of Vladimir Putin and a national aversion to free markets and anything associated with Western liberalism.

The Summers Conundrum (The Nation)

Behold, the results of “liberal” economists. My core point is this: this kind of blinkered “economic thinking” is the very reason why the voting public believes there is no substantial difference between the Republicans and the (Neoliberal) Democrats. And they’re right! It’s also worth noting that Professor Cowen has let the cat out of the bag, tacitly admitting that the very discipline of economics is inherently right-wing (it makes him suspect among the left…). Yet it still masquerades as ideologically neutral!

Which brings me to a topic I’ve wanted to mention. A new book by New York Times economics columnist Binyamin Appelbaum explains how this kind of “economic thinking” came to dominate the actions of the world’s governments in place of all other social considerations. But it wasn’t always so. Quite the opposite! In fact, economics…

…was not always the imperial discipline. Roosevelt was delighted to consult lawyers such as [Adolf] Berle, but he dismissed John Maynard Keynes as an impractical “mathematician.” Regulatory agencies were headed by lawyers, and courts dismissed economic evidence as irrelevant. In 1963, President John F. Kennedy’s Treasury secretary made a point of excluding academic economists from a review of the international monetary order, deeming their advice useless. William McChesney Martin, who presided over the Federal Reserve in the 1950s and ’60s, confined economists to the basement…In the 1950s, a Columbia economist complained he made as much as a skilled carpenter.

How Economists’ Faith in Markets Broke America (The Atlantic)

But it was not to last. Appelbaum’s book details how economists became the de facto technocratic rulers of society, supplanting all other notions of good and effective governance. The story begins, ironically, with Roosevelt’s New Deal, which…

…created a new need for economists. [It] inflated the size of the federal government, and politicians turned to economists to make sense of their new complicated initiatives and help rationalize their policies to constituents. Even Milton Friedman, the dark apostle of market fundamentalism, admitted that “ironically, the New Deal was a lifesaver.” Without it, he said, he may have never been employed as an economist. From the mid-1950s to the late 1970s the number of economists in the federal government swelled from about 2,000 to 6,000. The New Deal also gave rise to cost-benefit analysis. Large projects, like dam building or rural electrification, needed to be budgeted and constrained…

The Tyranny of Economists (The New Republic)

This gave rise to the kind of cost/benefit analysis described above, where absolutely everything—human life, the ecosystem, labor, healthy communities, etc.—had its price, and that price became part of painful-but-necessary “tradeoffs”—a totally new way of thinking about how to govern society. This concept of cost/benefit analysis, even though it produced distinct winners and losers, wasn’t seen as a problem, because…

…the government could theoretically redirect a little money from the winners to the losers, to even things out: For example, if a policy caused corn consumption to drop, the government could redirect the savings to aggrieved farmers. However, it didn’t provide any reason why the government would rebalance the scale, just that it was possible.

What is now called the Kaldor-Hicks principle “is a theory,” Appelbaum says, “to gladden the hearts of winners: it is less clear that losers will be comforted by the possession of theoretical benefits.” The principle remains the theoretical core of cost-benefit analysis, Appelbaum says. It’s an approach that sweeps the political problems of any policy—what to do about the losers—under the rug.

The Tyranny of Economists (The New Republic)
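
To make the Kaldor-Hicks test concrete, here is a minimal sketch in Python. The policy and the dollar figures are invented for illustration; nothing here is drawn from Appelbaum’s book.

# The Kaldor-Hicks test reduced to its core: a policy "passes" if the
# winners gain more than the losers lose, because the winners *could*
# compensate the losers. Nothing checks whether they actually do.
def kaldor_hicks_efficient(total_gains, total_losses):
    # True if winners could fully compensate losers and still come out ahead.
    return total_gains > total_losses

# Hypothetical policy: importers gain $100M, displaced workers lose $60M.
gains, losses = 100_000_000, 60_000_000
print(kaldor_hicks_efficient(gains, losses))  # True -- the policy "passes"
# The $60M transfer to the losers remains entirely theoretical.

The whole political problem lives in that last comment: the test is satisfied by a transfer that nobody is obliged to make.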

In fact, many of the proponents of global “free-trade” openly acknowledged that there would inevitably be “winners” and “losers” from such policies. But, they claimed, some of the gains of the winners could be easily siphoned off to compensate the losers, making everyone better off in the long run. Win-win thinking at its finest.

It should be obvious by now what kind of a sick joke that was. It should also be proof positive of just how drastically economic theory can diverge from reality.

It was also World War Two that ushered in the concept of Gross Domestic Product, or GDP (originally Gross National Product, or GNP), which was designed to measure total national output for the war effort. Even the economists (Kuznets et al.) who created it explicitly warned that it was not to be taken as a be-all and end-all measure of societal health or well-being. It was designed to manage the War Economy, and its continual increase was not to be regarded as an end in itself.

Yet that’s exactly what it became thanks to economists.

It was the ultimate triumph of “market society” as Polanyi described it. Markets and money were now the sole governing principles. Political decisions were reduced to simply a series of cost-benefit analyses, freeing politicians from any moral culpability for their decisions. Governing society was no longer about increasing the general welfare as the Framers of the Constitution imagined—it would now be simply about increasing GDP and making the necessary “tradeoffs”.

With the Neoliberal revolution, economists emerged from the basement and took over the place:

Starting in the 1970s…economists began to wield extraordinary influence. They persuaded Richard Nixon to abolish the military draft. They brought economics into the courtroom. They took over many of the top posts at regulatory agencies, and they devised cost-benefit tests to ensure that regulations were warranted. To facilitate this testing, economists presumed to set a number on the value of life itself; some of the best passages of Appelbaum’s fine book describe this subtle revolution. Meanwhile, Fed chairmen were expected to have economic credentials. Soon the noneconomists on the Fed staff were languishing in the metaphorical basement.

But, in the wake of the Powell Memorandum, the biggest beneficiary was big business, which soon poured bottomless amounts of money into economics departments (such as the one that employs Cowen, as well as Summers’s Harvard) and a dizzying array of “think tanks” which employed the ever-expanding number of economics graduates. Economics soon went from virtual obscurity to one of the most popular majors at American universities, especially for children of the affluent. In the 1980s, big corporations and the wealthy…

…soon found a powerful ally in economists, a vast majority of whom opposed regulation as inefficient. Corporations began to argue that if the cost of compliance to a new regulation (say seatbelts or lead remediation) exceeded the benefit, it shouldn’t be implemented. The government, starting at the end of Nixon’s administration and continuing to this day, agreed.

Cost-benefit analysis hinged on an ever-changing calculation of the monetary value of a human life. If a life could be shown to be expensive, regulation could be justified. If not, it would be blocked or scrapped. The EPA, in 2004—to allow for more lax air pollution regulations—quietly sliced eight percent off their value of human life, and then another three percent in 2008 by deciding to not adjust for inflation. The fluctuating value of life was a seemingly rational but conveniently opaque method for making political decisions. It simultaneously trimmed away the gray areas of political discourse by reducing the debate to a small set of numbers and obscured the policy in hundreds of pages of statistics, figures, and formulas. This marriage of rational simplicity and technocratic complexity provided cover for regressive policies that favored corporations over taxpayers. Economists reduced a question that dogged political philosophers for centuries—about how much harm is acceptable in a society—to a math problem.

The Tyranny of Economists (The New Republic)
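
The arithmetic of those two EPA cuts compounds, which is easy to miss. A quick sketch, assuming the quoted percentages are applied successively to a normalized value:

# Compounding the two value-of-life cuts described above: eight percent
# sliced off in 2004, then roughly three percent more in 2008 by
# declining to adjust for inflation.
value = 1.0              # normalized value of a statistical life
value *= 1 - 0.08        # the 2004 cut
value *= 1 - 0.03        # the 2008 effective cut
print(f"total reduction: {1 - value:.1%}")  # about 10.8%

A quiet ten-plus percent markdown on the statistical price of a life, enough to flip any regulation sitting near the margin from justified to unjustified.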

Here’s another particularly vivid example of the results of that type of thinking:

In June of 1985, the Consumer Product Safety Commission issued a “national consumer alert” about the type of sofa chair that strangled [two-year-old Joy] Griffith. But the commission still needed to decide if they would require design changes. So Warren Prunella, the chief economist for the Commission, did some calculations. He figured that 40 million chairs were in use, each of which lasted ten years. Estimates said modifications likely would save about one life per year, and since the commission had decided in 1980 that the value of a life was one million dollars, the benefit of the requirement would be only ten million. This was far below the cost to the manufacturers. So in December, the commission decided that they didn’t need to require chair manufacturers to modify their products. If this seems odd today, it was then too—so odd, in fact, that the chair manufacturers voluntarily changed their designs.

Prunella’s calculations were the result of a growing reliance on cost-benefit analysis, something that the Reagan administration had recently made mandatory for all new government regulations. It signaled the rise of economists to the top of the federal regulatory apparatus. “Economists effectively were deciding whether armchairs should be allowed to crush children,” Binyamin Appelbaum writes in his new book The Economists’ Hour. “The government’s growing reliance on cost-benefit meant that economists like Prunella were exercising significant influence over life and death decisions.” Economics had become a primary language of politics.

The Tyranny of Economists (The New Republic)
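
Prunella’s calculation reduces to a few lines of arithmetic. Here it is as a sketch; the inputs are the figures from the quote, and the per-chair figure at the end is my own extrapolation:

# Prunella's 1985 cost-benefit calculation, using the figures quoted above.
chairs_in_use        = 40_000_000
chair_lifetime_years = 10
lives_saved_per_year = 1
value_of_a_life      = 1_000_000  # the Commission's 1980 figure, in dollars

benefit = lives_saved_per_year * chair_lifetime_years * value_of_a_life
print(f"benefit of redesign: ${benefit:,}")          # $10,000,000
print(f"per chair: ${benefit / chairs_in_use:.2f}")  # $0.25

On this math, any redesign costing the industry more than about 25 cents per chair “failed” the test, with children’s lives on the other side of the ledger.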

And this is how we got to the policies of today, where, as Margaret Thatcher confidently declared, “there is no alternative”:

“The United States experienced a revolution. No gun was fired. No lives were lost. Nobody marched. Most people didn’t notice. Nonetheless, it happened.”…what Appelbaum presents could be seen as a picture of a dramatic class-war, a conservative counter-revolution in reaction to the New Deal government, duplicitously legitimized by a regressive political theory: economics. Or as a more bracing economics writer, John Kenneth Galbraith, once put it: “What is called sound economics is very often what mirrors the needs of the respectably affluent.”

The Tyranny of Economists (The New Republic)

To reiterate, Larry Summers was the chief economic advisor to Obama—a Democrat. What, then, is the difference between the two major political parties again?

…a 1979 survey of economists that “found 98 percent opposed rent controls, 97 percent opposed tariffs, 95 percent favored floating exchange rates, and 90 percent opposed minimum wage laws.” And in a moment of impish humor [Appelbaum] notes that “Although nature tends toward entropy, they shared a confidence that economies tend toward equilibrium.” Economists shared a creepy lack of doubt about how the world worked.

The Tyranny of Economists (The New Republic)

No wonder Cowen (who manages the Koch-funded Mercatus Center at George Mason University) is such a fan! And thus you get his praise of how Summers is always “thinking like an economist” despite his alleged “liberal values.” So when you are urged, for example, to “think like an economist,” you are all but guaranteed to come up with conclusions which overwhelmingly favor the rich and powerful and screw over the rest of us. And all of this is presented as totally nonpolitical and “just common sense!”

Isn’t it funny how “bad economics” is anything that helps labor and the working class?

However, this kind of quasi-religious faith in free trade and free markets has proven remarkably, even disastrously, ineffective in the real world:

Inequality has grown to unacceptable extremes in highly developed economies. From 1980 to 2010, life expectancy for poor Americans scandalously declined, even as the rich lived longer. Meanwhile, the primacy of economics has not generated faster economic growth. From 1990 until the eve of the financial crisis, U.S. real GDP per person grew by a little under 2 percent a year, less than the 2.5 percent a year in the oil-shocked 1970s.

How Economists’ Faith in Markets Broke America (The Atlantic)

…the theories often demonstrably did not do what they were supposed to do. Monetarism didn’t curb inflation, lax antitrust and low regulation didn’t spur innovation, and low taxes didn’t increase corporate investment. Big economic shocks of the 1970s, like the befuddling “stagflation,” provided reasons to abandon previous, more redistributive economic regimes, but a reader still burns to know: How could economists be so wrong, so often, and so clearly at the expense of the working people in the United States, yet still ultimately triumph so totally? It’s likely because what economists’ ideas did do, quite effectively, was divert wealth from the bottom to the top. This entrenched their power among the winners they helped create.

The Tyranny of Economists (The New Republic)

And this type of thinking has now permeated the entire world as Neoliberalism encircled the globe, from Chile to China. As a result, we see the entire world burning down—metaphorically in the case of places like Chile, Lebanon, Syria, France, Spain, Russia, Indonesia, Hong Kong, and even New York City; and literally in places affected by climate change like California. It’s also led to the majority of the world’s population living under some kind of strong-man authoritarian rule, with surveillance states expanding daily and democracy under dire threat everywhere.

New Delhi now has to distribute gas masks to students just so they can go outside. Isn’t it time we stopped listening to the economists, even the allegedly “liberal” ones like Larry Summers as well as overtly pro-corporate Libertarians like Cowen? In reality, they are all of a piece, and it’s time for these sociopaths to go into the dumpster of history where they belong. John Maynard Keynes himself hoped that economists would eventually become “about as important as dentists.” But that’s drastically unfair to dentists—they are far more useful and have done far less harm to civilization! Carpenters and dentists provide real benefits to society. Economists, however, should probably be treated the way witches were treated in Medieval Europe. To paraphrase Diderot: Man will never be free until the last CEO is strangled with the entrails of the last economist.

The Origin of Paper Money 7

It’s here that we finally get to what’s really the heart of this entire series of posts, which is this: in the West, paper money has been an instrument of revolution.

Both the American and French Revolutions were funded via paper money, and it’s very likely they could not have succeeded without it. It allowed new and fledgling regimes to command necessary resources and fund their armies, which allowed them to take on established states. While such states have mints, a tax base, ownership of natural resources, the ability to write laws, and so on, a rebellion against an established order has none of these things. The ability to issue IOUs as payment, then, makes raising funds—and hence starting a revolution—far more feasible. As we’ve already seen, just about every financial innovation throughout history came about due to the costs of waging wars. Paper money was no exception.

One might even go so far as to say that the American, French and Russian Revolutions would never have been able to happen at all without the invention of paper money!

Washington Crossing the Delaware by Emanuel Leutze, 1851 (Metropolitan Museum of Art, New York)

1. The United States

Earlier we looked at the financial innovations that the colonies undertook to deal with the lack of precious metals in circulation. Wherever paper money and banks had been created, commerce and prosperity increased.

Then it all came to a screeching halt.

The British government passed laws which forbade the issuing and circulation of paper money in the colonies. The monetary experiments came to an end. As you might expect, the domestic economy shrank, and commerce was severely constricted. Of course, the colonists became quite angry at this turn of events.

British authorities initially viewed colonial paper currency favorably because it supported trade with England, but following New England’s “great inflation” in the 1740s, this view changed. Parliament passed the Currency Act of 1751 to strictly limit the quantity of paper currency that could be issued in New England and to strengthen its fiscal backing.

The Act required the colonies to retire all existing bills of credit on schedule. In the future, the colonies could, at most, issue fiat currencies equal to one year’s worth of government expenditures provided that they retired the bills within two years. During wars, colonies could issue larger amounts, provided that they backed all such issuances with taxes and compensated note holders for any losses in the real value of the notes, presumably by paying interest on them.

As a further important constraint on the colonies’ monetary policies, Parliament prohibited New England from making any fiat currency legal tender for private transactions. In 1764, Parliament extended the Currency Act to all of the American colonies.

Paper Money and Inflation in Colonial America (Owen F. Humpage, Economic Commentary, May 13, 2015)

With colonial governments forbidden from issuing paper notes as IOUs, banking might have filled the void. But that option was also cut off by the British government. Last time we saw that the South Sea Bubble, along with a panoply of related schemes, had nearly taken down the entire British economy (as John Law’s similar scheme had done in France). In response, Parliament passed the Bubble Act, which forbade any joint-stock corporations except those expressly authorized by a Royal Charter. This effectively put the kibosh on banking as an alternative source of paper money in the American colonies.

Given their instinct for experiment in monetary matters, it would have been surprising if the colonists had not discovered or invented banks. They did, and their enthusiasm for this innovation would have been great had it not also been systematically curbed.

In the first half of the eighteenth century the New England colonies, along with Virginia and South Carolina, authorized banking institutions. The most famous, as also the most controversial of these, was the magnificently named Land Bank Manufactory Scheme of Massachusetts which, very possibly, owed something to the ideas of John Law.

The Manufactory authorized the issue of bank notes at nominal interest to subscribers to its capital stock – the notes to be secured, more or less, by the real property of the stockholders. The same notes could be used to pay back the loan that their issue had incurred. This debt could also be repaid in manufactured goods or produce, including that brought into existence by the credit so granted.

The Manufactory precipitated a bitter dispute in the colony. The General Court was favorable, a predisposition that was unquestionably enhanced by the award of stock to numerous of the legislators. Merchants were opposed. In the end, the dispute was carried to London.

In 1741, the Bubble Acts – the British response, as noted, to the South Sea Company and associated promotions and which outlawed joint-stock companies not specifically authorized by law – were declared to apply to the colonies. It was an outrageous exercise in post-facto legislation, one that helped inspire the Constitutional prohibition against such laws. However, it effectively ended the colonial banks. (Galbraith, pp. 56-57)

Benjamin Franklin, as we have seen, was a longstanding advocate of paper money. He wrote treatises on the subject, and even printed some of it on behalf of the government of Pennsylvania. It was this paper money, he argued, that was the cause of the colonies’ general prosperity in contrast to the widespread poverty and discontent he witnessed everywhere in England:

Before the war, the colonies sent Benjamin Franklin to England to represent their interests. Franklin was greatly surprised by the amount of poverty and high unemployment. It just didn’t make sense: England was the richest country in the world, but its working class was impoverished. He wrote, “The streets are covered with beggars and tramps.”

It is said that he asked his friends in England how this could be so. They replied that they had too many workers. Many believed, along with Malthus, that wars and plague were necessary to rid the country of manpower surpluses.

“We have no poor houses in the Colonies; and if we had some, there would be nobody to put in them, since there is, in the Colonies, not a single unemployed person, neither beggars nor tramps.” – Benjamin Franklin

He was asked why the working class in the colonies were so prosperous.

“That is simple. In the Colonies, we issue our own paper money. It is called ‘Colonial Scrip.’ We issue it in proper proportion to make the goods pass easily from the producers to the consumers. In this manner, creating ourselves our own paper money, we control its purchasing power and we have no interest to pay to no one.” – Benjamin Franklin

Soon afterward, the English bankers demanded that the King and Parliament pass a law prohibiting the colonies from using their scrip money. Only gold and silver could be used, which would be provided by the English bankers. This began the plague of debt-based money in the colonies that had cursed the English working class.

The first law was passed in 1751, and then a harsher law was passed in 1763. Franklin claimed that within one year, the colonies were filled with unemployment and beggars, just like in England, because there was not enough money to pay for the goods and work. The money supply had been cut in half.

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

A good comment to the above article notes other factors which were also at work:

The timing of the shift in British policy toward colonial scrip (1763) also encompasses…the end of the Seven Years’ War, better known in the United States as the French and Indian War.

William Pitt’s prosecution of the war was conducted by running up government debt, and the settlement of this debt after the war’s conclusion required the raising of taxes by Parliament. Since, from Britain’s view, the war had been fought in order to protect its colonies, it felt that it was only fair that the colonies bore some of the financial burden. Colonial scrip was useless to Parliament in this regard, as was barter. The repayment of British lenders to the Crown could only be done in specie.

The colonies, as you correctly pointed out, did not have this in any significant quantity, although in the view of British authorities this was the colonies’ problem and not theirs. This policy also came on the heels of the approach of benign neglect conducted by Robert Walpole as Prime Minister, under which the colonies were allowed to do pretty much as they pleased so long as their activities generally benefited the British Crown. It should also be noted here that demands of payment of taxes in hard currency is a common tactic for colonial powers to undermine local economies and customs. It played that role in fomenting the American Revolution as well as the Whiskey Rebellion of the new Constitutional republic, not to mention how it was used in South Africa to compel natives participating in a traditional economy to abandon their lands and take up work as laborers in the gold mines.

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

Now, it would be unreasonable to say that this was THE cause of the American Revolution. In school, we’re taught that taxes were the main cause: “No taxation without representation” went the slogan (which precipitated the Boston Tea Party). We’re also told that the colonists were much aggrieved by taxes and duties such as those of the unpopular Stamp Act.

But the suppression of paper money and local currency issuance by the British government appears to have been just as much of a cause, although it is probably unknown to the vast majority of Americans. The reason for this strange omission is unclear. Galbraith thinks that more conservative attitudes towards money creation in modern times have caused even American historians to argue that the British authorities were largely correct in their actions!

The English historian John Twells wrote about the money of the colonies, the Colonial Scrip:

“It was the monetary system under which America’s Colonies flourished to such an extent that Edmund Burke was able to write about them: ‘Nothing in the history of the world resembles their progress.’ It was a sound and beneficial system, and its effects led to the happiness of the people.

“In a bad hour, the British Parliament took away from America its representative money, forbade any further issue of bills of credit, these bills ceasing to be legal tender, and ordered that all taxes should be paid in coins. Consider now the consequences: this restriction of the medium of exchange paralyzed all the industrial energies of the people. Ruin took place in these once flourishing Colonies; most rigorous distress visited every family and every business, discontent became desperation, and reached a point, to use the words of Dr. Johnson, when human nature rises up and asserts its rights.”

Peter Cooper, industrialist and statesman, wrote:

“After Franklin gave explanations on the true cause of the prosperity of the Colonies, the Parliament enacted laws forbidding the use of this money in the payment of taxes. This decision brought so many drawbacks and so much poverty to the people that it was the main cause of the Revolution. The suppression of the Colonial money was a much more important reason for the general uprising than the Tea and Stamp Acts.”

Our Founding Fathers knew that without financial independence and sovereignty there could be no other lasting freedoms. Our freedoms and national sovereignty are being lost because most people do not understand our money system…

Hidden History: According to Benjamin Franklin, the real reason for the Revolutionary War has been hid from you (Peak Prosperity)

If paper money was a cause of the American Revolution, it was also the solution. The Continental Congress issued IOUs to pay for the war—called ‘Continental notes’ or ‘Continental scrip’:

With independence the ban by Parliament on paper money became, in a notable modern phrase, inoperative. And however the colonies might have been moving towards more reliable money, there was now no alternative to government paper…

Before the first Continental Congress assembled, some of the colonies (including Massachusetts) had authorized note issues to pay for military operations. The Congress was without direct powers of taxation; one of its first acts was to authorize a note issue. More states now authorized more notes.

It was by these notes that the American Revolution was financed….

Robert Morris, to whom the historians have awarded the less than impeccable title of ‘Financier of the Revolution’, obtained some six-and-a-half million dollars in loans from France, a few hundred thousand from Spain, and later, after victory was in prospect, a little over a million from the Dutch. These amounts, too, were more symbolic than real. Overwhelmingly the Revolution was paid for with paper money.

Since the issues, Continental and state, were far in excess of any corresponding increase in trade, prices rose – at first slowly and then, after 1777, at a rapidly accelerating rate…Eventually, in the common saying, ‘a wagon-load of money would scarcely purchase a wagon-load of provisions’. Shoes in Virginia were $5000 a pair in the local notes, a full outfit of clothing upwards of a million. Creditors sheltered from their debtors like hunted things lest they be paid off in worthless notes. The phrase ‘not worth a Continental’ won its enduring place in American language. (Galbraith, pp. 58-59)

Despite this painful bout of hyperinflation, as Galbraith notes, there was simply no other viable alternative to fund the Revolutionary War at the time:

Thus the United States came into existence on a full tide not of inflation but of hyperinflation – the kind of inflation that ends only in the money becoming worthless. What is certain, however, is the absence of any alternative.

Taxes, had they been authorized by willing legislators on willing people, would have been hard, perhaps impossible, to collect in a country of scattered population, no central government, not the slightest experience in fiscal matters, no tax-collection machinery and with its coasts and numerous of its ports and customs houses under enemy control.

And people were far from willing. Taxes were disliked for their own sake and also identified with foreign oppression. A rigorous pay-as-you-go policy on the part of the Continental Congress and the states might well have caused the summer patriots (like the monetary conservatives) to have second thoughts about the advantages of independence.

Nor was borrowing an alternative. Men of property, then the only domestic source, had no reason to think the country a good risk. The loans from France and Spain were motivated not by hope of return but by malice towards an ancient enemy.

So only the notes remained. By any rational calculation, it was the paper money that saved the day. Beside the Liberty Bell there might well be a tasteful replica of a Continental note. (Galbraith, p. 60)

While this is often used as yet another cautionary tale of “government money printing” by libertarians and goldbugs, a couple of things need to be noted. The first, and most obvious, is that without government money printing there would be no United States. That seems like an important point to me.

The second is a take from Ben Franklin himself. He argued that inflation is really just a tax by another name. And, as opposed to “conventional” government taxation, the inflationary tax falls more broadly across the population, which in his view made it actually a more even-handed and fair method of taxation!

And you can kind of see his point. With legislative taxes, government always has to decide who and what to tax—and how much. This inevitably means that the government picks winners and losers. Sometimes this can be done wisely, but in practice it often is not. An inflationary tax, by contrast, cannot easily be shaped by legislation to favor privileged insiders, unlike more conventional methods of direct taxation, where the rich and well-connected are often spared much of the burden thanks to undue influence over legislators:

From 1776 to 1785 Franklin serves as the U.S. representative to the French court. He has the occasion to write on one important monetary topic in this period, namely, the massive depreciation of Congress’ paper money — the Continental dollar — during the revolution. In a letter to Joseph Quincy in 1783, Franklin claims that he predicted this outcome and had proposed a better paper money plan, but that Congress had rejected it.

In addition, around 1781 Franklin writes a tract called “Of the Paper Money of America.” In it he argues that the depreciation of the Continental dollar operated as an inflation tax or a tax on money itself. As such, this tax fell more equally across the citizenry than most other taxes. In effect, every man paid his share of the tax according to how long he retained a Continental dollar between the time he received it in payment and when he spent it again, the intervening depreciation of the money (inflation in prices) being the tax paid.

Benjamin Franklin and the Birth of a Paper Money Economy (PDF; Philadelphia Fed)

I’m not sure that many people would agree with that sentiment today, but it is an interesting take on the matter.
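Franklin’s “inflation tax” can be put in simple arithmetic terms. Here is a minimal sketch in Python of how the burden scales with how long each person holds a depreciating note; all of the numbers are made up for illustration:

```python
# A minimal sketch of Franklin's "inflation tax" idea, using invented numbers.
# The tax you pay is the purchasing power lost while you hold the note.

def inflation_tax(face_value, monthly_depreciation, months_held):
    """Purchasing power lost while holding a depreciating note."""
    remaining = face_value * (1 - monthly_depreciation) ** months_held
    return face_value - remaining

# Someone who spends a 100-dollar Continental note within a month pays far
# less of the "tax" than someone who sits on it for a year:
print(inflation_tax(100, 0.05, 1))   # ~5   -> quick spender
print(inflation_tax(100, 0.05, 12))  # ~46  -> long holder
```

On these made-up numbers, holding the note twelve times longer costs roughly nine times the tax, which is exactly Franklin’s point: everyone pays, in proportion to how long they hold the money.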

Once the war was won, and with the Continental notes inflated away to nothing, the fledgling government could issue money for real. The first public building constructed by the new government was the mint, and Congress was given the power to tax.

Although the war ended in 1783, the finances of the United States remained somewhat chaotic through the 1780s. In 1781, successful merchant Robert Morris was appointed superintendent of finance and personally issued “Morris notes”—commonly called Short and Long Bobs based on their tenure or time to maturity—and thus began the long process to reestablish the government’s credit.

In 1785, the dollar became the official monetary unit of the United States, the first American mint was established in Philadelphia in 1786, and the Continental Congress was finally given the power of taxation to pay off the debt in 1787, thus bringing together a more united fiscal, currency, and monetary policy.

Crisis Chronicles: Not Worth a Continental—The Currency Crisis of 1779 and Today’s European Debt Crisis (Liberty Street)

One of the more common silver coins used all over the world at this time was the Maria Theresa thaler (or taler), issued under the Holy Roman Empire. The “thaler” name traces back to the silver mines at Joachimsthal, which gave their name to the whole family of coins (today the town is known as Jáchymov and is located in the Czech Republic).

“Taler” became a common name for currency because so many German states and municipalities picked it up. During the sixteenth century, approximately 1,500 different types of taler were issued in the German-speaking countries, and numismatic scholars have estimated that between the minting of the first talers in Jáchymov and the year 1900, about 10,000 different talers were issued for daily use and to commemorate special occasions.

The most famous and widely circulated of all talers became known as the Maria Theresa taler, struck in honor of the Austrian empress at the Gunzberg mint in 1773…The coin…became so popular, particularly in North Africa and the Middle East that, even after she died, the government continued to mint it with the date 1780, the year of her death.

The coin not only survived its namesake but outlived the empire that had created it. In 1805 when Napoleon abolished the Holy Roman Empire, the mine at Gunzberg closed, but the mint in Vienna continued to produce the coins exactly as they had been with the same date, 1780, and even with the mintmark of the closed mint. The Austro-Hungarian government continued to mint the taler throughout the nineteenth century until that empire collapsed at the end of World War I.

Other countries began copying the design of the Maria Theresa taler shortly after it went into circulation. They minted coins of a similar size and put on them a bust of a middle-aged woman who resembled Maria Theresa. If they did not have a queen of their own who fit the description, they used an allegorical female such as the bust of Liberty that appeared on many U.S. coins of the nineteenth century.

The name dollar penetrated the English language via Scotland. Between 1567 and 1571, King James VI issued a thirty-shilling piece that the Scots called the sword dollar because of the design on the back of it. A two-mark coin followed in 1578 and was called the thistle dollar.

The Scots used the name dollar to distinguish their currency, and thereby their country and themselves, more clearly from their domineering English neighbors to the south. Thus, from very early usage, the word dollar carried with it a certain anti-English or antiauthoritarian bias that many Scottish settlers took with them to their new homes in the Americas and other British colonies. The emigration of Scots accounts for much of the subsequent popularity of the word dollar in British colonies around the world… (Weatherford, History of Money, pp. 115-116)

In 1782, Thomas Jefferson wrote in his Notes on a Money Unit of the U.S. that “The unit or dollar is a known coin and most familiar of all to the mind of the people. It is already adopted from south to north.”

The American colonists became so accustomed to using the dollar as their primary monetary unit that, after independence, they adopted it as their official currency. On July 6, 1785, the Congress declared that “the money unit of the United States of America be one dollar.” Not until April 2, 1792, however, did Congress pass a law to create an American mint, and only in 1794 did the United States begin minting silver dollars. The mint building, which was started soon after passage of the law and well before the Capitol or White House, became the first public building constructed by the new government of the United States. (Weatherford, History of Money, p. 118)

In the nineteenth century, there were strong arguments over the establishment of a central bank in the United States. One was, in fact, chartered, and its charter was later revoked. We’ll talk a little about this in the final entry of this series next time; for now, it is beyond the scope of this post.

Scene from the French Revolution

2. France

In the late eighteenth century, France’s financial circumstances were still very dire. It constantly needed to raise money for its perennial wars with England which, as we saw earlier, successfully funded its own wars with paper money and state borrowing via the Bank of England—an option not available to France in the wake of the Mississippi Bubble’s collapse and the failure of John Law’s Banque Royale. France’s generous loan to the United States’ revolutionaries may have been well appreciated by us Americans, but in retrospect, it was probably not the best move considering France’s fiscal situation (plus the fact that Revolution would soon engulf it; something the French aristocracy obviously had no way of knowing at the time).

In the aftermath of the Revolution, the National Assembly repudiated the King’s debts. It also suspended taxation. But it still badly needed money, especially since many of the countries surrounding France (e.g. Austria, Prussia, Great Britain, Spain and several other monarchies) declared war on it soon after the King met the guillotine. The answer they came up with was, once again, monetizing land. In this case, it was the land seized from the Catholic Church by the Revolutionary government. “[T]he National Assembly agreed that newly nationalised properties in the form of old church land could be purchased through the use of high-denomination assignats, akin to interest-bearing government bonds, mortgaged (assignée) on the property.”

The Estates-General had been summoned in consequence of the terrible fiscal straits of the realm. No more could be borrowed. There was no central bank which could be commanded to take up loans. All still depended on the existence of willing lenders or those who could be apprehended and impressed with their duty.

The Third Estate could scarcely be expected to vote new or heavier levies when its members were principally concerned with the regressive harshness of those then being collected. In fact, on 17 June 1789 the National Assembly declared all taxes illegal, a breathtaking step softened by the provision that they might be collected on a temporary basis.

Meanwhile memories of John Law kept Frenchmen acutely suspicious of ordinary paper money; during 1788, a proposal for an interest-bearing note issue provoked so much opposition that it had to be withdrawn. But a note issue that could be redeemed in actual land was something different. The clerical lands were an endowment by Heaven of the Revolution.

The decisive step was taken on 19 December 1789. An issue of 400 million livres was authorized; it would, it was promised, ‘pay off the public debt, animate agriculture and industry and have the lands better administered’. These notes, the assignats, were to be redeemed within five years from the sale of an equivalent value of the lands of the Church and the Crown.

The first assignats bore interest at 5 per cent; anyone with an appropriate amount could use them directly in exchange for land. In the following summer when a new large issue was authorized, the interest was eliminated. Later still, small denominations were issued.

There were misgivings. The memory of Law continued to be invoked. An anonymous American intervened with Advice on the Assignats by a Citizen of the United States. He warned the Assembly against the assignats out of the rich recent experience of his own country with the Continental notes. However, the initial response to the land-based currency was generally favourable.

Had it been possible to stop with the original issue or with that of 1790, the assignats would be celebrated as a remarkably interesting innovation. Here was not a gold, silver or tobacco standard but one based solidly and logically on the good soil of France.

Purchasing power in the first years had stood up well. There was admiring talk of how the assignats had put land into circulation. And business had improved, employment had increased and sales of the Church and other public lands had been facilitated. On occasion, sales had been too good. In relation to annual income, the prices set were comparatively modest; speculators clutching large packages of the assignats had arrived to take advantage of the bargains.

However, in France, as earlier in America, the demands of revolution were insistent. Although the land was limited, the claims upon it could be increased.

The large issue of 1790 was followed by others – especially after war broke out in 1792. Prices denominated in assignats now rose; their rate of exchange for gold and silver, dealing in which had been authorized by the Assembly, declined sharply. In 1793 and 1794, under the Convention and the management of Cambon, there was a period of stability. Prices were fixed with some success. What could have been more important, the supply of assignats was curtailed by the righteous device of repudiating those that had been issued under the king. In those years they retained a value of around 50 per cent of their face amount when exchanged for gold and silver.

Soon, however, need again asserted itself. More and more were printed. In an innovative step in economic warfare, Pitt, after 1793, allowed the royalist emigres to manufacture assignats for export to France. This, it was hoped, would hasten the decay.

In the end, the French presses were printing one day to supply the needs of the next. Soon the Directory halted the exchange of good real estate for the now nearly worthless paper – France went off the land standard. Creditors were also protected from having their debts paid in assignats. This saved them from the ignominy of having (as earlier in America) to hide out from their debtors. (Galbraith, pp. 64-66)

The lands of aristocrats who had fled France were confiscated as well and used to back further issuances of paper currency. Despite this, as with the Continentals, the value of the assignats soon inflated away to very little. France then issued a new paper money, the mandats territoriaux, also carrying an entitlement to land, in an attempt to stabilize the currency. But distrust in the paper currency (and in the government) was so endemic that the mandats began to depreciate even before they were issued:

With the sale of the confiscated property, a great debtor class emerged, which was interested in further depreciation to make it cheaper to pay back debts. Faith in the new currency faded by mid-year 1792. Wealth was hidden abroad and specie flowed to surrounding countries with the British Royal Mint heavily purchasing gold, particularly in 1793 and 1794.

But deficits persisted and the French government still needed to raise money, so in 1792, it seized the land of emigrants and those who had fled France, adding another 2 billion livres or more to French assets. War with Belgium that year was largely self-funded as France extracted some rents, but not so for the war with England in 1793. Assignats no longer circulated as a medium of payment, but were an object of speculation. Specie was scarce, but sufficient, and farmers refused to accept assignats, which were practically demonetized. In February 1793, citizens of Paris looted shops for bread they could no longer afford, if they could find it at all.

In order to maintain its circulation, France turned to stiff penalties and the Reign of Terror extended into monetary affairs. During the course of 1793, the Assembly prohibited buying gold or silver at a premium, imposed a forced loan on a portion of the population, made it an offense to sell coin or differentiate the price between assignats and coin, and under the Law of the Maximum fixed prices on some commodities and mandated that produce be sold, with the death penalty imposed for infractions.

France realized that to restore order, the volume of paper money in circulation must decrease. In December 1794, it repealed the Law of the Maximum. In January 1795, the government permitted the export of specie in exchange for imports of staple goods. Prices fluctuated wildly and the resulting hyperinflation became a windfall for those who purchased national land with little money down. Inflation peaked in October 1795. In February 1796, in front of a large crowd, the assignat printing plates were destroyed.

By 1796, assignats gave way to specie and by February 1796, the experiment ended. The French tried to replace the assignat with the mandat, which was backed by gold, but so deep was the mistrust of paper money that the mandat began to depreciate before it was even issued and lasted only until February 1797…

Crisis Chronicles: The Collapse of the French Assignat and Its Link to Virtual Currencies Today (Liberty Street)

…In February 1797 (16 Pluvoise year V), the Directory returned to gold and silver. But by then the Revolution was an accomplished fact. It had been financed, and this the assignats had accomplished. They have at least as good a claim on memory as the guillotine. (Galbraith, p. 66)

Eventually, France’s money system stabilized once its political situation more-or-less settled down, but entire books have been written about that subject. The military dictatorship of Napoleon Bonaparte sold off the French lands in North America to the United States to raise money for its wars of conquest on the European continent. Napoleon also finally established a central bank in France based on the British model.

In 1800, the lingering suspicion of the French of such institutions had yielded to the financial needs of Napoleon. There had emerged the Banque de France which, in the ensuing century, developed in rough parallel with the Bank of England. In 1875, the former Bank of Prussia became the Reichsbank. Other countries had acquired similar institutions or soon did…(Galbraith, p. 41)

It might be going too far to say that without paper money, neither the American nor the French revolution would ever have happened. But nor is it entirely absurd to say that this may well be the case. It’s certainly doubtful that they would have succeeded. It’s difficult to imagine how different history would be today had it not been for paper money and its role in revolution.

Paper money would continue to play that role throughout the Age of Revolutions well into the Twentieth Century, as Galbraith notes:

Paper was similarly to serve the Soviets in and after the Russian Revolution. By 1920, around 85 per cent of the state budget was being met by the manufacture of paper money…

In the aftermath of the Revolution the Soviet Union, like the other Communist states, became a stern defender of stable prices and hard money. But the Russians, no less than the Americans or the French, owe their revolution to paper.

Not that the use of paper money is a guarantee of revolutionary success. In 1913, in the old Spanish town of Chihuahua City, Pancho Villa was carrying out his engaging combination of banditry and social reform. Soldiers were cleaning the streets, land was being given to the peons, children were being put in schools and Villa was printing paper money by the square yard.

This money could not be exchanged for any better asset. It promised nothing. It was sustained by no residue of prestige or esteem. It was abundant. Its only claim to worth was Pancho Villa’s signature. He gave this money to whosoever seemed to be in need or anyone else who struck his fancy. It did not bring him success, although he did, without question, enjoy a measure of popularity while it lasted. But the United States army pursued him; more orderly men intervened to persuade him to retire to a hacienda in Durango. There, a decade later, when he was suspected by some to be contemplating another foray into banditry, social reform, and monetary policy, he was assassinated. (Galbraith, pp. 66-67)

3. Conclusions

Given that both the Continentals and the assignats suffered from hyperinflation towards the end, they have often been held up as a cautionary tale: governments are inherently profligate and cannot be trusted with money creation; only by strictly pegging paper money issuance to a cache of gold stashed away in a vault somewhere can hyperinflation be avoided.

As Galbraith notes, this is highly selective. Sure, if you look just for instances of paper money overissuance and inflation, you will find them. But this deliberately ignores the instances, often lasting for decades if not centuries, in which paper money functioned exactly as intended all across the globe: from ancient China, to colonial America, to modern times. It emphasizes the inflationary scare stories, but intentionally ignores the very real stimulus to commercial activity that paper money has provided, as opposed to the extreme constraints of a precious metal standard. It also totally ignores extenuating circumstances in hyperinflations, such as Germany’s repayment of war debt in the twentieth century, or persistent economic warfare in the case of Venezuela today.

So the attitude that “government simply can’t be trusted” is more of a political opinion than something based on historical facts.

…in the minds of some conservatives…there must have been a lingering sense of the singular service that paper money had, in the recent past, rendered to revolution. Not only was the American Revolution so financed. So also was the socially far more therapeutic eruption in France. If the French citizens had been required to act within the canons of conventional finance, they could not, any more than the Americans, have acted at all. (Galbraith, pp. 61-62)

The desire for a gold standard comes from a desire to anchor the value of money in something outside the control of governments. But, of course, pegging the value of currency to a certain arbitrary amount of gold is itself a political choice. Nor does it guarantee price stability–the value of gold fluctuates. A gold standard is more a guarantee of the stability of the price of gold than of the stability of the value of money. Also, in almost every case of war and economic depression in modern history, the gold standard has been immediately chucked into the trash bin.

The other thing worth noting is that the worth of paper money is related to both issues of supply AND demand. Often, it’s not just that there is too much supply of currency. It’s that people refuse to accept the currency, leading just as assuredly to a loss in value.

And the lack of acceptance is usually driven by a lack of faith in the issuing government. You can see why this might be the case for assignats and Continentals. Both were issued by revolutionary governments whose very stability and legitimacy were in question, particularly in France. If the government issuing the currency (which is an IOU, remember) may not be around a year from now, then how willing are you to accept that currency? James Madison pointed out that the value of any currency was mostly determined by faith in the credit of the government issuing it. That’s why he and other Founding Fathers worked so hard to reestablish the credit of the United States following the Continental note debacle.

As Rebecca Spang—the author of “Stuff and Money in the Time of the French Revolution”—notes, many people in revolutionary France were vigorously opposed to the seizure of Church property. Thus, they would not accept the validity of notes backed by that property’s value. This lack of acceptance contributed just as much to hyperinflation as did any profligacy on the part of the government:

Revolutionary France became a paradigm case for the quantity theory of money, the view that prices are directly and proportionately correlated with the amount of money in circulation, and for the deleterious consequences of letting the latter run out of control.

Yet Spang shows that such neat economic interpretations are inadequate. At times, for example, prices rose first and politicians boosted the money supply in response.

Spang reiterates that the first assignats were neither a revolutionary policy nor a form of paper money. But as her stylishly crafted narrative makes clear, this soon changed. Politicians made the cardinal error of thinking that the state could be stabilised by in effect destabilising its money.

Popular distrust of the “real” worth of assignats prompted a contagion of fraud, suspicion and uncertainty. How could one tell a fake assignat, when technology couldn’t replicate them precisely? How could they even be used, when there was no compulsion beyond patriotic duty for sellers to accept them as payment? Small wonder that so many artists made trompe l’oeil images out of them — what looked solid and real was anything but…

‘Stuff and Money in the Time of the French Revolution’, by Rebecca Spang (Financial Times)
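For reference, the quantity theory the review invokes is usually stated through the standard equation of exchange (this is textbook notation, not anything from Spang’s book):

```latex
% Equation of exchange: M = money supply, V = velocity of circulation,
% P = price level, Q = real output.
\[
  MV = PQ \qquad\Longrightarrow\qquad P = \frac{MV}{Q}
\]
% The hyperinflation story holds V and Q roughly fixed, so P rises in
% proportion to M. Spang's objection is that causality can also run the
% other way: prices rise first (or V jumps as people flee the currency),
% and the state prints more M in response.
```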

Note that the situation of a stable government is totally different. Britain’s government was eminently stable compared to those of the United States and France at that time, hence its money retained most of its value, even when convertibility was temporarily suspended. This also underlies the value of Switzerland’s currency today, since it has a legendarily stable, neutral government (and really not that much in the way of actual resources).

So those who argue that America’s “fiat” money is no good would have to make the case that the United States government is somehow more illegitimate or more unstable than the governments of other wealthy, industrialized nations. To my mind, this is tantamount to treason. Yet no one ever calls them out on this point. From that standpoint, the biggest threat to the money supply comes not from overissuance (hyperinflation is nowhere to be seen), but from undermining the faith in, and credit of, the United States government. That’s been done exclusively by Republicans in recent years by grandstanding over the debt ceiling—an artificial borrowing constraint imposed during the United States’ entry into World War One. Really, this should be considered an unpatriotic and treasonous act. It almost certainly would have been perceived as such by the Founding Fathers.

I always have the same response to libertarians who sneer at the “worthlessness” of government fiat money. My response is this: if you truly believe it is worthless, then I will gladly take it off your hands for you. Please hand over all the paper money you have in your wallet right now at this very moment, as well as all the paper money you may have lying around your house. If you want, you can even take out some “worthless” paper money from the nearest ATM and hand it over to me too; I’ll gladly take that off your hands as well. You can give me as much as you like.

To date, I have yet to have a libertarian take me up on that offer. I wonder why?

Next: The Civil War finally establishes a national paper currency for the U.S.

The Origin of Paper Money 6

1. France

France ended up conducting its own monetary experiment with paper money at around the same time as the American colonies, in the early 1700s. Unlike the American experiment, it was not successful. It was initiated by an immigrant Scotsman, a gambling addict fleeing a murder charge, by the name of John Law (rendered Jean Lass in French).

At this time—the early 1700s—France was having much the same conversation about the money supply as the Anglo-Saxon world. There, the problem was not so much a shortage of coins as an excess of sovereign debt, due to the wild spending sprees of France’s rulers on foreign wars and luxury lifestyles.

Despite being probably the wealthiest and most powerful nation in Western Europe, France’s debts (really, the King’s debts) exceeded its assets by quite a bit, at least on paper. The country struggled to raise enough funds via its antiquated and inefficient feudal tax system to pay the interest on its bonds; France’s debt traded in secondary markets as what we might today call junk bonds (i.e. low odds of repayment).

Louis XIV, having lived too long, had died the year before Law’s arrival. The financial condition of the kingdom was appalling: expenditures were twice receipts, the treasury was chronically empty, the farmers-general of the taxes and their horde of subordinate maltôtiers were competent principally in the service of their own rapacity.

The Duc de Saint-Simon, though not always the most reliable counsel, had recently suggested that the straightforward solution was to declare national bankruptcy – repudiate all debt and start again. Philippe, Duc d’Orleans, the Regent for the seven-year-old Louis XV, was largely incapable of thought or action.

Then came Law. Some years earlier, it is said, he had met Philippe in a gambling den. The latter ‘had been impressed with the Scotsman’s financial genius.’ Under a royal edict of 2 May 1716, Law, with his brother, was given the right to establish a bank with capital of 6 million livres, about 250,000 English pounds…
(Galbraith, pp. 21-22)

…The creation of the bank proceeded in clear imitation of the already successful Bank of England. Under special license from the French monarch, it was to be a private bank that would help raise and manage money for the public debt. In keeping with his theories on the benefits of paper money, Law immediately began issuing paper notes representing the supposedly guaranteed holdings of the bank in gold coins.

Law’s…bank took in gold and silver from the public and lent it back out in the form of paper money. The bank also took deposits in the form of government debt, cleverly allowing people to claim the full value of debts that were trading at heavy discounts: if you had a piece of paper saying the king owed you a thousand livres, you could get only, say, four hundred livres in the open market for it, but Law’s bank would credit you with the full thousand livres in paper money. This meant that the bank’s paper assets far outstripped the actual gold it had in store, making it a precursor of the “fractional-reserve banking” that’s normal today. Law’s bank had, by one estimate, about four times as much paper money in circulation as its gold and silver reserves…

The new paper money had an attractive feature: it was guaranteed to trade for a specific weight of silver, and, unlike coins, could not be melted down or devalued. Before long, the banknotes were trading at more than their value in silver, and Law was made Controller General of Finances, in charge of the entire French economy.

The Invention of Money (The New Yorker)
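The “four times as much paper money” estimate is easy to translate into the arithmetic of a reserve ratio. A minimal sketch, with invented figures apart from the quoted 4:1 ratio:

```python
# A minimal sketch of fractional-reserve arithmetic. The 4:1 paper-to-specie
# ratio is from the quoted passage; the absolute figures are invented.

specie_reserves = 25_000_000      # livres of gold and silver in the vault
notes_outstanding = 100_000_000   # livres of paper notes in circulation

reserve_ratio = specie_reserves / notes_outstanding
print(f"reserve ratio: {reserve_ratio:.0%}")  # 25% -> four livres of paper
                                              # per livre of specie

# The system holds only as long as redemptions stay modest:
daily_redemptions = 500_000       # livres presented for gold each day
days_until_empty = specie_reserves / daily_redemptions
print(f"a sustained run empties the vault in {days_until_empty:.0f} days")
```

Nothing breaks until note-holders show up for their gold faster than the vault can pay out, which is precisely what a loss of confidence produces.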

It’s also worth noting that banknotes were denominated in the unit of account, unlike coins, which typically were not. Coins’ value usually fluctuated against the unit of account (what prices were expressed in), sometimes by the day. What a silver sovereign or gold Louis d’Or was worth on one day might be different than the next, especially since the monarchs liked to devalue the coinage in order to decrease the amount of their debts. However, if you brought, say, 10 livres, 18 sous worth of coins to Law’s bank, the paper banknote would be written up for the amount the coins were worth at that time: 10 livres, 18 sous.

By buying back the government’s debt, Law was able to “retire” it. Thus, the money circulating was ultimately backed by government debt (bonds), just like our money today. Law’s promise to redeem the notes for specie gave users the confidence to use them. Later on, the government would decree the notes of the Banque Generale to be the “official” money for payment of taxes and settlement of all debts, legitimizing their value by fiat. Law later attempted to sever the link to gold and silver by demonetizing them. He was not successful; paper money was far too novel at the time for people to trust its value in the absence of anything tangible backing it.

Not much of what transpired would be that unusual today, but it was pretty radical for the early 1700s. Had Law stopped at this point, it’s likely that all of this would have been successful, as Galbraith points out:

In these first months, there can be no doubt, John Law had done a useful thing. The financial position of the government was eased. The bank notes loaned to the government and paid out by it for its needs, as well as those loaned to private entrepreneurs, raised prices….[and] the rising prices…brought a substantial business revival.

Law opened branches of his bank in Lyons, La Rochelle, Tours, Amiens and Orleans; presently, in the approximate modern language, he went public. His bank became a publicly chartered company, the Banque Royale.

Had Law stopped at this point, he would be remembered for a modest contribution to the history of banking. The capital in hard cash subscribed by the stockholders would have sufficed to satisfy any holders of notes who sought to have them redeemed. Redemption being assured, not many would have sought it.

It is possible that no man, having made such a promising start, could have stopped…
(Galbraith, pp. 22-23)

Trading government debt for paper money helped lower the government’s debts, but on paper, France’s liabilities still exceeded its assets. But it had one asset that had not yet been monetized—millions of acres of land on the North American continent. So Law set out to monetize that land by turning it into shares in a joint-stock company called the Mississippi Company (Compagnie d’Occident). The Mississippi Company had a monopoly on all trading with the Americas. Buying a share in the company meant a cut of the profits (i.e. equity) of trading with North America.

The first loans and the resulting note issue having been visibly beneficial – and also a source of much personal relief – the Regent proposed an additional issue. If something does good, more must do better. Law acquiesced.

Sensing the need, he also devised a way of replenishing the reserves with which the Banque Royale backed up its growing volume of notes. Here he showed that he had not forgotten his original idea of a land bank.

His idea was to create the Mississippi Company to exploit and bring to France the very large gold deposits which Louisiana was thought to have as subsoil. To the metal so obtained were also to be added the gains of trade. Early in 1719, the Mississippi Company (Compagnie d’Occident), later the Company of the Indies, was given exclusive trading privileges in India, China and the South Seas. Soon thereafter, as further sources of revenue, it received the tobacco monopoly, the right to coin money and the tax farm. (Galbraith, p. 23)

Law—or the Duc d’Arkansas as he was now known—talked up the corporation so well that the value of the shares skyrocketed—probably the world’s very first stock bubble (but hardly the last). Gambling fever was widespread and contagious, as the desire to get rich by doing nothing is a human universal. The term “millionaire” was coined. Law took advantage of the inflated share price to buy back more of the government’s debt. And the money to buy the shares at the inflated prices was printed by the bank itself. Knowing that there was far more paper than gold and silver to back it in the kingdom, Law then tried to break the link between paper money and specie by demonetizing gold and silver; at one point making it illegal to even hold precious metals.

He was unsuccessful. Paper money was still too new, and people were unwilling to trust it without the backing of precious metal, causing a loss of faith in the currency. Later suspensions of convertibility came only after generations of paper money use. Law’s scheme went from its dizzying peak to total collapse in under a year.

[Law] funded the [Mississippi] company the same way he had funded the bank, with deposits from the public swapped for shares. He then used the value of those shares, which rocketed from five hundred livres to ten thousand livres, to buy up the debts of the French King. The French economy, based on all those rents and annuities and wages, was swept away and replaced by what Law called his “new System of Finance.”

The use of gold and silver was banned. Paper money was now “fiat” currency, underpinned by the authority of the bank and nothing else. At its peak, the company was priced at twice the entire productive capacity of France…that is the highest valuation any company has ever achieved anywhere in the world.

The Invention of Money (The New Yorker)

Galbraith and Weatherford summarize the shell game that Law’s “system” ended up becoming:

To simplify slightly, Law was lending notes of the Banque Royale to the government (or to private borrowers) which then passed them on to people in payment of government debts or expenses. These notes were then used by the recipients to buy stock in the Mississippi Company, the proceeds from which went to the government to pay expenses and pay off creditors who then used the notes to buy more stock, the proceeds from which were used to meet more government expenditures and pay off more public creditors. And so it continued, each cycle being larger than the one before. (Galbraith, p. 24)

The Banque Royale printed paper money, which investors could borrow in order to buy stock in the Mississippi Company; the company then used the new notes to pay out its bogus profits. Together the Mississippi Company and the Banque Royale were producing paper profits on each other’s accounts. The bank had soon issued twice as much paper money as there was specie in the whole country; obviously it could no longer guarantee that each paper note would be redeemed in gold. (Weatherford, p. 131)
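To make the circularity concrete, here is a toy simulation of the loop Galbraith describes; every number in it is invented purely for illustration:

```python
# A toy simulation of the circular flow described above: the Banque Royale
# prints notes, the government spends them, recipients buy Mississippi
# stock, and the proceeds prompt a still-larger issue the next time around.

notes_outstanding = 0
issue = 50_000_000   # livres printed in the first round (invented)
growth = 1.5         # "each cycle being larger than the one before"

for cycle in range(1, 6):
    notes_outstanding += issue  # new notes lent out and spent
    issue *= growth             # next round must be bigger than the last
    print(f"cycle {cycle}: {notes_outstanding / 1e6:.0f}M livres outstanding")
```

The point is simply that the loop has no natural stopping place: each pass requires a larger note issue than the last, right up until confidence gives way.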

Such a scheme couldn’t last, of course. Essentially the entire French economy—its central bank, its money supply, its tax system, and the monopoly on land in North America—was in the hands of a single giant conglomerate run by one man. That meant that when one part of the system failed, all the rest went down with it, like mountain climbers roped together.

Because the central bank and the Mississippi Company were effectively one enterprise, the bank had an incentive to loan out excess money to drive the share price up—in other words, to inflate a stock bubble based on credit. This is always a bad idea. Finally, Law’s exaggeration of the returns on investments in the Mississippi Company inflated expectations far beyond what was realistic.

The popping of the Mississippi stock bubble, followed by a run on the bank, was enough to bring the whole thing crashing down.

People started to wonder whether these suddenly lucrative investments were worth what they were supposed to be worth; then they started to worry, then to panic, then to demand their money back, then to riot when they couldn’t get it.

Gold and silver were reinstated as money, the company was dissolved, and Law was fired, after a hundred and forty-five days in office. In 1720, he fled the country, ruined. He moved from Brussels to Copenhagen to Venice to London and back to Venice, where he died, broke, in 1729.

The Invention of Money (The New Yorker)

As Law must have known, if you gamble big, sometimes you lose big.

Some of the death of the Bank was murder, not suicide. As part of his System, one of Law’s initiatives was to simplify and modernize the inefficient and antiquated French tax system. Taxes were collected by tax farmers (much as in ancient Rome), and Law threatened to upset their apple cart. He also attempted to end the sale of government offices to the highest bidder. This made him a lot of enemies among the moneyed classes, who thrived on graft and corruption. Such influential people (notably the financiers the Paris brothers) were instrumental in the run on the bank and the subsequent loss of confidence in the money system:

[Law] set about streamlining a tax system riddled with corruption and unnecessary complexity. As one English visitor to France in the late seventeenth century observed, “The people being generally so oppressed with taxes, which increase every day, their estates are worth very little more than what they pay to the King; so that they are, as it were, tenants to the Crown, and at such a rack rent that they find great difficulty to get their own bread.” The mass of offices sold to raise money had caused one of Louis XIV’s ministers to comment, “When it pleases Your Majesty to create an office, God creates a fool to purchase it.” There were officials for inspecting the measuring of cloth and candles; hay trussers; examiners of meat, fish and fowl. There was even an inspector of pigs’ tongues.

This did nothing for efficiency, Law deemed, and served only to make necessities more expensive and to encourage the holders of the offices “to live in idleness and deprive the state of the service they might have done it in some useful profession, had they been obliged to work.” In place of the hundreds of old levies he swept away (over forty in one edict alone), Law introduced a new national taxation system called the denier royal, based on income. The move caused an outcry among the holders of offices, many of whom were wealthy financiers and members of the Parliament, but delight among the public. “The people went dancing and jumping about the streets,” wrote Defoe. “They now pay not one farthing tax for wood, coal, hay, oats, oil, wine, beer, bread, cards, soap, cattle, fish.” (Janet Gleeson, Millionaire, pp. 155-156)

Michel Aglietta, in his magisterial work on money, notes that Law…

…wanted to introduce the logic of capitalism in France, based on providing credit through money creation. Money creation had to be based on expected future wealth, and no longer on the past wealth accumulated in precious metals. (Aglietta, p. 206, emphasis in original)

The danger is that if this wealth fails to materialize, or if people lose the belief that it will, confidence in the system is lost and failure soon follows.

Although John Law has come down in history as a grifter, and his ideas as fundamentally unsound, many of his ideas eventually became fundamental tenets of modern global finance:

The great irony of Law’s life is that his ideas were, from the modern perspective, largely correct. The ships that went abroad on behalf of his great company began to turn a profit. The auditor who went through the company’s books concluded that it was entirely solvent—which isn’t surprising, when you consider that the lands it owned in America now produce trillions of dollars in economic value.

Today, we live in a version of John Law’s system. Every state in the developed world has a central bank that issues paper money, manipulates the supply of credit in the interest of commerce, uses fractional-reserve banking, and features joint-stock companies that pay dividends. All of these were brought to France, pretty much simultaneously, by John Law.

The Invention of Money (The New Yorker)

Law’s efforts left a lingering suspicion of paper money in France. Unfortunately, the revenue problem was not definitively solved. Going back on a specie standard delivered a huge blow to commerce. While England’s paper money system flourished, France stagnated economically. Eventually, the government’s revenue situation became so dire that the King had no choice but to call the Estates-General—the extremely rare parliamentary session that kicked off the French Revolution—in 1789.

Once the Mississippi bubble burst, a lot of the capital in France needed some new outlet to invest in. Much of that capital fled across the channel to England, which at the time was inflating a stock bubble of its own:

France’s ruin was England’s gain. Numerous bruised Mississippi shareholders chose to reinvest in English South Sea shares.

The previous month, with a weather eye to developments in France, the South Sea Company managed to beat its rival the Bank of England and secure a second lucrative deal with the government whereby it took over a further £48 million of national debt and launched a new issue of shares. A multitude of English and foreign investors were now descending on London as they had flocked less than a year earlier to Paris “with as much as they can carry and subscribing for or buying shares.”

In Exchange Alley–London’s rue Quincampoix–the sudden surge of new money also bubbled a plethora of alternative companies launched to capitalize on the new fashion for financial fluttering… (Gleeson, p. 200)

2. England

Britain chose a different tack – sovereign debt would be monetized and circulate as money. It too utilized the joint-stock company model that had been invented in the previous centuries to enable the Europeans to raise the funds to exploit and colonize the rest of the world. A bank was founded as a chartered company to take in money through subscribed shares and loan out that money to the King. That debt—and not land—would securitize the notes issued by the bank. The notes would then circulate as money, albeit alongside precious metal coins and several other forms of payment. As with the original invention of sovereign debt in northern Italy, it was used to raise the necessary funds for war:

The modern system for dealing with [the] problem [of funding wars] arose in England during the reign of King William, the Protestant Dutch royal who had been imported to the throne of England in 1689, to replace the unacceptably Catholic King James II.

William was a competent ruler, but he had serious baggage—a long-running dispute with King Louis XIV of France. Before long, England and France were involved in a new phase of this dispute, which now seems part of a centuries-long conflict between the two countries, but at the time was variously called the Nine-Years’ War or King William’s War. This war presented the usual problem: how could the nations afford it?

King William’s administration came up with a novel answer: borrow a huge sum of money, and use taxes to pay back the interest over time. In 1694, the English government borrowed 1.2 million pounds at a rate of eight per cent, paid for by taxes on ships’ cargoes, beer, and spirits. In return, the lenders were allowed to incorporate themselves as a new company, the Bank of England. The bank had the right to take in deposits of gold from the public and—a second big innovation—to print “Bank notes” as receipts for the deposits. These new deposits were then lent to the King. The banknotes, being guaranteed by the deposits, were as good as gold money, and rapidly became a generally accepted new currency.

The Invention of Money (The New Yorker)

From this point forward, money would be circulating government debt. Plus, its value would be based on future revenues, as Aglietta noted above, and not just on the amount of gold and silver coins floating around.

The originality of the Bank of England was that it was not a deposit bank. Unlike for the Bank of Amsterdam, the coverage for the notes issued was very low (3 percent in the beginning). These notes, the counterparty to its loans to the state, replaced bills of exchange and became national and international means of payment for the bank’s customers.

They were not legal tender until 1833. But the securities issued by the bank, bringing interest on the public debt, became legal tender for all payments to the government from 1697 onwards. (Aglietta, pp. 136-137)

Why did the King of England have to borrow at all? Well, for a couple of reasons. The power to raise taxes had been taken away from the King and given to Parliament as a consequence of the English Revolution. That revolutionary era also witnessed the inauguration of goldsmith banking (such as that undertaken by John Law’s own family of goldsmiths). These goldsmith receipts were the forerunners of the banknote:

The English Civil War…broke out because parliament disputed the king’s right to levy taxes without its consent. The use of goldsmith’s safes as secure places for people’s jewels, bullion and coins increased after the seizure of the mint by Charles I in 1640 and increased again with the outbreak of the Civil War. Consequently some goldsmiths became bankers and development of this aspect of their business continued after the Civil War was over.

Within a few years of the victory by the parliamentary forces, written instructions to goldsmiths to pay money to another customer had developed into the cheque (or check in American spelling). Goldsmiths’ receipts were used not only for withdrawing deposits but also as evidence of ability to pay and by about 1660 these had developed into the banknote.

Warfare and Financial History (Glyn Davies, History of Money online)

By this time, control over money had passed into the hands of a rising mercantile class, who—thanks to the staggering wealth produced by globalized trade—possessed more wealth than mere princes and kings, but lacked the ability to write laws or to print money, powers which they strongly coveted. It was these merchants and “moneyed men” (often members of the Whig party in Parliament) who backed the Dutch stadtholder William of Orange’s claim to the English throne in 1688.

The banknotes began to circulate widely, displacing coins and bills of exchange. And it didn’t stop there: more money was quickly needed, and the Bank acquired more influence. Part of this was due to England being a naval—rather than an army—power. Warships require huge expenditures of capital to build. They also require a vast panoply of resources, such as wood, nails, iron, cloth, stocked provisions, and so forth; whereas land-based armies mainly require pay for soldiers and provisions (which can be commandeered). Thus, the financial means to mobilize resources were much more likely to develop in naval powers such as Holland and England than in continental powers like France, Austria and Spain.

This important post from the WEA Pedagogy blog uses excerpts from Ellen Brown’s Web of Debt to lay out the creation of the Bank of England and, consequently, of central banking in general (it is well worth reading in full):

William was soon at war with Louis XIV of France. To finance his war, he borrowed 1.2 million pounds in gold from a group of moneylenders, whose names were to be kept secret. The money was raised by a novel device that is still used by governments today: the lenders would issue a permanent loan on which interest would be paid but the principal portion of the loan would not be repaid.

The loan also came with other strings attached. They included:

– The lenders were to be granted a charter to establish a Bank of England, which would issue banknotes that would circulate as the national paper currency.

– The Bank would create banknotes out of nothing, with only a fraction of them backed by coin. Banknotes created and lent to the government would be backed mainly by government I.O.U.s, which would serve as the “reserves” for creating additional loans to private parties.

– Interest of 8 percent would be paid by the government on its loans, marking the birth of the national debt.

– The lenders would be allowed to secure payment on the national debt by direct taxation of the people. Taxes were immediately imposed on a whole range of goods to pay the interest owed to the Bank.

The Bank of England has been called “the Mother of Central Banks.” It was chartered in 1694 to William Paterson, a Scotsman who had previously lived in Amsterdam. A circular distributed to attract subscribers to the Bank’s initial stock offering said, “The Bank hath benefit of interest on all moneys which it, the Bank, creates out of nothing.” The negotiation of additional loans caused England’s national debt to go from 1.2 million pounds in 1694 to 16 million pounds in 1698. By 1815, the debt was up to 885 million pounds, largely due to the compounding of interest. The lenders not only reaped huge profits, but the indebtedness gave them substantial political leverage.

The Bank’s charter gave the force of law to the “fractional reserve” banking scheme that put control of the country’s money in a privately owned company. The Bank of England had the legal right to create paper money out of nothing and lend it to the government at interest. It did this by trading its own paper notes for paper bonds representing the government’s promise to pay principal and interest back to the Bank — the same device used by the U.S. Federal Reserve and other central banks today.

Note that the interest on the loan is paid, but never the principal. That meant that tax revenues were increasingly funneled to a small creditor class to whom the government was indebted. Today, we call such people bondholders, and they exercise their leverage over governments through the bond markets. For all intents and purposes, this system ended government sovereignty and tied the hands of even elected governments, limiting their ability to spend tax money on the domestic needs of their own people. Control over the state’s money was lost forever.
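As a quick sanity check on the quoted figures, we can compute the compound growth rate implied by the debt rising from 1.2 million pounds in 1694 to 885 million in 1815 (the dates and amounts come from the passage above; the arithmetic is mine):

```python
# Implied average compound growth rate of England's national debt,
# using the figures quoted above.

start, end = 1.2e6, 885e6   # pounds, 1694 and 1815
years = 1815 - 1694

implied_rate = (end / start) ** (1 / years) - 1
print(f"implied average growth: {implied_rate:.1%} per year")  # ~5.6%
```

Roughly 5.6 per cent a year, compounded for more than a century; below the original 8 per cent coupon, which is consistent with the interest being paid out of taxes while new wars piled new principal on top.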

An interesting couple of notes: William Paterson was, like John Law, a Scotsman—giving credence to the claim that it was the Scots who “invented Capitalism” (Adam Smith and James Watt were also Scots). It also raises the idea (to me, anyway) that the modern financial system was started by instinctive hustlers and gamblers. We’ve already referred to John Law’s expertise at the gambling tables of Europe and his ability to inspire confidence in his schemes. Paterson, upon returning to Scotland, began raising funds via stock subscriptions for an ambitious scheme to plant a colony in Central America. This scheme ended up being one of the worst disasters in history. Not only that, but the Darien scheme collapsed so badly that Scotland’s entire financial health was devastated, and the debacle is considered to be a factor in Scotland signing the Acts of Union, politically joining with England to the south.

For an overview of the Darien scheme, see this: Scotland’s lessons from Darien debacle (BBC)

The WEA Pedagogy blog then adds some additional details:

Some more detail of interest is that the creation of the Bank of England was tremendously beneficial for England. The King, no longer constrained, was able to build up his navy to counter the French. The massive (deficit) spending required for this purpose led to substantial progress in industrialization.

Quoting Wikipedia on this: “As a side effect, the huge industrial effort needed, including establishing ironworks to make more nails and advances in agriculture feeding the quadrupled strength of the navy, started to transform the economy. This helped the new Kingdom of Great Britain – England and Scotland were formally united in 1707 – to become powerful. The power of the navy made Britain the dominant world power in the late 18th and early 19th centuries”

The post then summarizes the history of the creation of central banking:

…It is in this spirit that we offer a “finance drives history” view of the creation of the first Central Bank. The history above can be encapsulated as follows:

1. Queen Elizabeth asserted and acquired the sovereign right to issue money.
2. The moneylenders (the mysterious 0.1% of that time) financed and funded a revolution against the king, acquiring many privileges in the process.
3. Then they financed and funded the restoration of the aristocracy, acquiring even more privileges in the process.
4. Finally, when the King was in desperate straits to raise money, they offered to lend him money at 8% interest, in return for creating the Bank of England, acquiring permanently the privilege of printing money on behalf of the king.

The process by which money was created by the Bank of England is extremely interesting. They acquired the debt of the King. This debt was used as collateral/backing for the money they created. The notes they issued were legal tender in England. Whenever necessary, they were prepared to exchange them for gold, at the prescribed rates. However, when the confidence of the public is high, the need for actual gold as backing is substantially reduced.

Origins of Central Banking (WEA Pedagogy Blog)

As I noted above, the importance of the Navy in the subsequent industrialization of England is often overlooked. A few scholars have argued that it was Britain’s emphasis on naval power that was a factor in England (and not somewhere else) becoming the epicenter of the Industrial Revolution. Many of its key inventions were sponsored by the government in order to fight and navigate more effectively at sea (from accurate clocks and charts to canned food). Even early mass production was prompted by the needs of the British Navy: ships’ pulley blocks were among the first items to be mass-produced by machinery.

Just like in other countries, the needs of war caused the Bank to issue more and more notes, greatly increasing the national debt. However, the vast profits of industrialization and colonialism were enough to support it. When convertibility was temporarily suspended by necessity (from 1797 to 1821, during the Napoleonic Wars), paper money continued to carry the trust of the public, unlike in France. Galbraith sums up the subsequent history of the Bank of England:

In the fifteen years following the granting of the original charter the government continued in need, and more capital was subscribed by the Bank. In return, it was accorded a monopoly of joint-stock, i.e., corporate, banking under the Crown, one that lasted for nearly a century. In the beginning, the Bank saw itself merely as another, though privileged, banker.

Similarly engaged in a less privileged way were the goldsmiths, who by then had emerged as receivers of deposits and sources of loans and whose operations depended rather more on the strength of their strong boxes than on the rectitude of their transactions. They strongly opposed the renewal of the Bank’s charter. Their objections were overcome, and the charter was renewed.

Soon, however, a new rival appeared to challenge the Bank’s position as banker for the government. This was the South Sea Company. In 1720, after some years of more routine existence, it came forward with a proposal for taking over the government debt in return for various concessions, including, it was hoped, trading privileges to the Spanish colonies, which, though it was little noticed at the time, required a highly improbable treaty with Spain.

The Bank of England bid strenuously against the South Sea Company for the public debt but was completely outdone by the latter’s generosity, as well as by the facilitating bribery by the South Sea Company of Members of Parliament and the government. The rivalry between the two companies did not keep the Bank from being a generous source of loans for the South Sea venture. All in all, it was a narrow escape.

For the enthusiasm following the success of the South Sea Company was extreme. In the same year that Law’s operations were coming to their climax across the Channel, a wild speculation developed in South Sea stock, along with that in numerous other company promotions, including one for a wheel for perpetual motion, one for ‘repairing and rebuilding parsonage and vicarage houses’ and the immortal company ‘for carrying on an undertaking of great advantage, but nobody to know what it is’. All eventually passed into nothing or something very near.

In consequence of its largely accidental escape, the reputation of the Bank for prudence was greatly enhanced.

As Frenchmen were left suspicious of banks, Englishmen were left suspicious of joint-stock companies. The Bubble Acts (named for the South Sea bubble) were enacted and for a century or more kept such enterprises under the closest interdict.

From 1720 to 1780, the Bank of England gradually emerged as the guardian of the money supply as well as of the financial concerns of the government of England. Bank of England notes were readily and promptly redeemed in hard coin and, in consequence, were not presented for redemption. The notes of its smaller competitors inspired no such confidence and were regularly cashed in or, on occasion, orphaned.
By around 1770, the Bank of England had become nearly the sole source of paper money in London, although the note issues of country banks lasted well into the following century. The private banks became, instead, places of deposit. When they made loans, it was deposits, not note circulation, that expanded, and, as a convenient detail, cheques now came into use. (Galbraith, 32-34)

By a complete accident, Britain was able to escape France’s fate. When the South Sea bubble popped, the Bank of England was able to reliably take up the slack and manage the government’s debt—an option that France did not have, since the central bank and the Company were all part of the same organization, and that organization had a monopoly over loans to the government, tax collection, and money creation.

Next time: An Instrument of Revolution.

The Origin of Paper Money 3

Despite paper instruments like bills of exchange having existed for centuries, for most ordinary people, money was exclusively the gold and silver coins minted by various national governments. Gold was used for high-value transactions, and silver for smaller ones. When the precious metals from the New World began flowing into Europe, the amount of coins dramatically increased, leading to a continent-wide bout of inflation.

The Spanish, the major beneficiaries of this increased money supply from the silver mines of Bolivia and Mexico, used the money to purchase all sorts of things from abroad and live large. Because they became so filthy rich with very little effort (the enslaved Native Americans did all the hard work of digging out the silver), the Spanish failed to develop any domestic industries or innovate much, and thus were passed over by the more industrious Northern Europeans—much like a wealthy, spoiled heir who never learns any practical skills until the money runs out—and by then it’s too late.

There were many in Europe after 1493 who knew only distantly of the discovery and conquest of lands beyond the ocean seas, or to whom this knowledge was not imparted at all. There were few, it can be safely said, who did not feel one of its principal consequences.

Discovery and conquest set in motion a vast flow of precious metal from America to Europe, and the result was a huge rise in prices – an inflation occasioned by an increase in the supply of the hardest of hard money.

Almost no one in Europe was so removed from market influences that he did not feel some consequence in his wage, in what he sold, in whatever trifling thing he had to buy.

The price increases occurred first in Spain where the metal first arrived; then, as they were carried by trade (or perhaps in lesser measure by smuggling or for conquest) to France, the Low Countries and England, inflation followed there.

In Andalusia, between 1500 and 1600, prices rose perhaps fivefold. In England, if prices during the last half of the fifteenth century, i.e. before Columbus, are taken as 100, by the last decade of the sixteenth century they were roughly at 250; eighty years later, by the decade of 1673 through 1682, they were around 350, up by three-and-a-half times from the level before Columbus, Cortez and the Pizarros. After 1680, they levelled off and subsided, as much earlier they had fallen in Spain. (Galbraith, pp. 8-9)

Prior to this era, Europe had dealt with ongoing, chronic shortages of precious metals for coins, because much of the continent’s silver leaked out through trading with the Arab world, especially after the Crusades. This is why much of the European economy remained unmonetized for so long. In fact, northern Italian bankers had invented banking and bills of exchange specifically to deal with this problem. Thus, markets in Europe remained confined to specific market towns and “ports of trade” and were subject to strict regulations by rulers. It was not a lack of desire for profits on the part of rulers, but a lack of coins that kept capitalism in embryo.

The vast increase in the money supply from New World silver and gold is what made capitalism possible in Western Europe, but that’s a story for another time.

At its peak in the early 17th century, 160,000 native Peruvians, slaves from Africa and Spanish settlers lived in Potosí to work the mines around the city: a population larger than London, Milan or Seville at the time. In the rush to exploit the silver, the first Spanish colonisers occupied the locals’ homes, forgoing the typical colonial urban grid and constructing makeshift accommodation that evolved into a chaotic mismatch of extravagant villas and modest huts, punctuated by gambling houses, theatres, workshops and churches.

High in the dusty red mountains, the city was surrounded by 22 dams powering 140 mills that ground the silver ore before it was moulded into bars and sent to the first Spanish colonial mint in the Americas. The wealth attracted artists, academics, priests, prostitutes and traders, enticed by the Altiplano’s icy mysticism. “I am rich Potosí, treasure of the world, king of all mountains and envy of kings” read the city’s coat of arms, and the pieces of eight that flowed from it helped make Spain the global superpower of the period.

Potosí: The mountain of silver that was the world’s first global city (Aeon)

How silver turned Potosí into ‘the first city of capitalism’ (The Guardian)

After prices finally leveled off in the late 1600s, this price spike led people to an important realization: the number of economic transactions (and hence the overall size of the economy and the capacity to specialize) depends on the amount of money in circulation. In other words, the volume of trade is determined by the quantity of currency available to conduct it.

Today this is known as the quantity theory of money.
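In its modern form (a much later formalization; nothing like this notation existed in the 1600s), that idea is usually written as the equation of exchange:

```latex
% The equation of exchange, the standard modern statement of the
% quantity theory of money:
%   M = quantity of money in circulation
%   V = velocity of money (how often each unit changes hands)
%   P = price level
%   T = volume of transactions
\[
  M V = P T
\]
% With V and T roughly stable, more money (M) shows up as higher
% prices (P); too little money forces T -- trade itself -- to shrink.
```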

This newfound abundance of silver in Europe caused rising prices–the so-called “Price Revolution.” For the first time there was enough money to create a new class of people whose wealth consisted primarily of money as opposed to land: moneyed men, or the merchant caste. It also caused Spanish coins to be widely used and distributed, functioning as the world’s first global currency from the Americas to the Middle East to Asia:

The silver of the Americas made possible a world economy for the first time, as much of it was traded not only to the Ottomans but to the Chinese and East Indians as well, bringing all of them under the influence of the new silver supplies and standardized silver values. Europe’s prosperity boomed, and its people wanted all the teas, silks, cottons, coffees, and spices which the rest of the world had to offer. Asia received much of this silver, but it too experienced the silver inflation that Europe underwent. In China, silver had one-fourth the value of gold in 1368, before the discovery of America, but by 1737 the ratio plummeted to twenty to one, a decline of silver to one-fifth of its former value. This flood of American silver came to Asia directly from Acapulco across the Pacific via Manila in the Philippines, whence it was traded to China for spices and porcelain. (Weatherford, Indian Givers, pp. 16-17)
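The arithmetic in that quote checks out. A silver-to-gold value ratio of one-fourth corresponds to an exchange ratio of 4:1, so the move to 20:1 is indeed a fall to one-fifth of silver’s former value:

```latex
% Silver's value measured in gold:
%   1368:  1 oz gold = 4 oz silver   =>  silver worth 1/4 of gold
%   1737:  1 oz gold = 20 oz silver  =>  silver worth 1/20 of gold
\[
  \frac{1/20}{1/4} = \frac{4}{20} = \frac{1}{5}
\]
% i.e. silver fell to one-fifth of its former value relative to gold.
```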

The so-called “Price Revolution” taught Europeans another important lesson: what constituted money didn’t change, but its purchasing power did. Therefore, they concluded, the value of money depended on how much of it was in circulation, and not on some intrinsic quality. If there was a shortage of cash, it was worth a lot (i.e. it had high purchasing power). If there was a surplus, it wasn’t worth nearly as much (i.e. it had lower purchasing power). They had seen this first-hand.

In other words, the value of money had to do with how much of it there was, more than any intrinsic, magical quality. The value attributed to gold and silver was merely a cultural artifact.

In fact, money had to be more or less useless as a commodity, since if it were more useful as a commodity than as money, then that’s what it would be used for, and there would be perennial shortages of currency, causing the economy to contract.

This led to the following conclusion: if money has no inherent value, but is merely an expedient for spot transactions, then why not paper? But it does have to be backed by something; otherwise people will lose confidence in it. Although precious metal coins could be devalued by government edicts, their worth could never fall to zero, since there was always a commodity market for gold and silver for things like jewelry and tea sets. Precious metals tended to flow from countries where they were undervalued to countries where the commodity price was higher, causing perennial spot shortages throughout Europe, along with the requisite economic chaos.

The basic problem people were struggling with was this: since all money at the time was dependent on precious metals, how could you increase the supply of money without stumbling upon new sources of precious metal, as the Spanish had done? The money in circulation had to be increased—that was obvious to a growing number of people. But the low-hanging fruit of gold and silver had already been harvested. And with vast new material wealth continuing to flow into Europe from the Americas, how could the money supply be increased enough to take advantage of this?

Paper was an obvious solution. Paper had come to Europe in the Middle Ages from China. After the Black Death, many of the cotton clothes worn by the deceased were turned into pulp, which helped spread the use of paper, and indirectly drove the commercial revolution of the Middle Ages, along with innovations like Arabic numerals and double-entry bookkeeping (aka the “Venetian method”). The printing press, developed by Gutenberg in Mainz around 1450, further enhanced the power of paper. But the real use of paper was in banking:

In the West, paper found its most important use as a means of keeping ledgers in banks. Long before it was used as a means of printing more money, it was used by bankers to increase the money supply. Only later did it gradually emerge as a replacement for coins in daily commerce. The initial development and circulation of monetary bills of paper came about as a side effect of banking. (Weatherford, p. 128)

Paper instruments of credit, such as bills of exchange, were already circulating widely throughout Europe. Yet, underneath it all, money was still ultimately tied to finite amounts of precious metal. Paper checks were simply transfers of monies from one account to another, similar to giro banking in the ancient world, while bills of exchange were:

“…essentially a written order to pay a fixed sum of money at a future date. Bills of exchange were originally designed as short-term contracts but gradually became heavily used for long-term borrowing. They were typically rolled over and became de facto short-term loans to finance longer-term projects…bills of exchange could be re-sold, with each seller serving as a signatory to the bill and, by implication, insuring the buyer of the bill against default…”

Crisis Chronicles: The Commercial Credit Crisis of 1763 and Today’s Tri-Party Repo Market (Liberty Street)

One solution was just to issue credit in excess of the amount of gold and silver stored in your vaults—the so-called “goldsmith’s trick.” This became especially common around the time of the English Revolution, when goldsmiths acted as moneylenders and bankers. As long as there was enough gold and silver sitting in the vault to cover the claims of the people showing up to exchange their paper, you were all right. But if, at any one point, more paper was presented for redemption than you had gold and silver to cover, you were doomed. This is why governments were reluctant to embrace such a solution (later, this idea would underpin fractional reserve banking).
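Here is a minimal sketch of why the trick works until it suddenly doesn’t (the figures are hypothetical, not a model of any actual goldsmith’s books): notes can safely circulate in excess of reserves so long as redemptions on any given day stay below the coin actually in the vault.

```python
# Minimal sketch of the "goldsmith's trick": paper claims circulate
# in excess of metal reserves. All figures are hypothetical.

def run_goldsmith(reserves, notes_issued, redemption_rates):
    """Simulate daily redemptions against a fixed coin reserve.

    reserves         -- gold/silver actually in the vault
    notes_issued     -- face value of paper notes in circulation
    redemption_rates -- fraction of outstanding notes redeemed each day
    """
    for day, rate in enumerate(redemption_rates, start=1):
        demanded = notes_issued * rate
        if demanded > reserves:
            print(f"Day {day}: {demanded:.0f} demanded, "
                  f"only {reserves:.0f} in the vault -- ruin.")
            return
        # In quiet times redeemed coin is re-deposited or re-lent,
        # so the reserve is quickly replenished.
        print(f"Day {day}: {demanded:.0f} redeemed without trouble.")
    print("No run occurred; the excess issue went unnoticed.")

# 1,000 in coin backing 4,000 in circulating notes:
run_goldsmith(1000, 4000, [0.05, 0.10, 0.08])  # quiet days: fine
run_goldsmith(1000, 4000, [0.05, 0.60])        # a panic: a run, and ruin
```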

The question ultimately boiled down to, if not gold and silver, then what would give paper money its value? And what would limit its supply? Otherwise, any enterprising printer could just print up money in any amount and give it to himself. Ultimately, the answers would come down to some sort of government authority to regulate the issuance of such bills, and back it up with the government’s credit.

One very common idea floating around in the late 1600s and early 1700s was the proposal for a land bank–essentially monetizing land. Such banks wouldn’t take deposits in gold or silver; rather, they would issue government-backed paper money securitized by mortgages on land. “In these early cases the term “bank” meant simply the collection or batch of bills of credit issued for a temporary period. If successful, reissues would lead to a permanent institution or bank in the more modern sense of the term.” After all, even if a country didn’t have gold and silver mines, it did always have land. Land was valuable, and inherently limited in supply–even more so than gold and silver (“Buy land – they aren’t making any more of it,” as the quip attributed to Mark Twain goes). This was a variant of the idea of paper money as a claim on real resources. However, the problem was much the same as with the goldsmith’s trick: what happens if you print money in excess of the underlying resources?

[I]f we look at the world through the lens of the late 17th century…[m]oney was made of metal, and there was therefore no scope for creating more money without finding new supplies of silver and gold. There were two types of wealthy individual: moneyed men and landed men.

The land bank proponents were early contributors to the economic debate. In their pamphlets the principal problem that they identified was the sluggish economy. They all agreed that the situation could be improved and saw the best means of improvement as an increase in the supply of money.

Rather than doing this as the Spanish and Portuguese did by sailing to the new world and bringing back vast quantities of precious metals, they proposed using the banking model that had succeeded in Amsterdam and Venice. According to Schumpeter, they “fully realised the business potentialities of the discovery that money – and hence capital in the monetary sense of the term – can be manufactured or created”.

Britain, which was not rich in terms of gold and silver, had plenty of potential in its land. Therefore, a land bank appeared to be a sensible suggestion. None of the land banks that were set up succeeded…

Land Bank Proposals 1650-1705 (PDF)

Land banks had already been established in the American Colonies in a limited fashion:

In 1686, Massachusetts established the first American land bank. Others soon followed.

Despite the name, these were not true banks; they did not accept deposits. Instead, they issued “banks” or notes, or “bills on loan,” to borrowers who put up land as collateral with the bank.

To fortify confidence in the notes, colonial governments promised to issue only a fixed amount of notes and for a set term and to secure their loans with collateral typically equal to twice the amount of the loan.

These notes soon became legal tender for all public and private debts. Principal and interest payments were due annually, but the bank often delayed the first principal payment for a few years. Payments had to be made in notes or in specie.

While the notes furnished a circulating currency, the interest payments provided a revenue stream to the colonial governments.

Paper Money and Inflation in Colonial America (Owen F. Humpage)
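The mechanics Humpage describes are simple enough to sketch. In the toy example below, the collateral-equal-to-twice-the-loan rule comes from the passage above; the interest rate, term, and amounts are hypothetical:

```python
# Toy sketch of a colonial land-bank loan. The 2:1 collateral rule is
# from the quoted passage; the rate, term, and figures are hypothetical.

land_value = 200.0            # appraised value of the mortgaged land
loan_notes = land_value / 2   # notes issued: collateral = 2x the loan
rate       = 0.05             # hypothetical annual interest rate
term_years = 10               # hypothetical fixed term

print(f"Notes put into circulation: {loan_notes:.0f}")
for year in range(1, term_years + 1):
    principal_due = loan_notes / term_years            # level repayment
    outstanding   = loan_notes - principal_due * (year - 1)
    interest_due  = rate * outstanding                 # colony's revenue
    print(f"Year {year}: principal {principal_due:.1f}, "
          f"interest {interest_due:.2f}")
```

While the notes circulated as currency, the interest stream flowed to the colonial government, which is exactly the dual role the quoted passage describes.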

National land banks were proposed in the early 1700s by two people who would become very influential in the history of paper money: John Law (for France) and Benjamin Franklin (for Pennsylvania). Later on, this idea would be used by the revolutionary French government to back its own paper currency called assignats. They used the land seized from the Catholic Church and some aristocrats to back the money. And there was a lot of this land—the Church owned an estimated one-fifth of all the land in France prior to the Revolution.

We can think of this as the very earliest rumblings of today’s Modern Monetary Theory (MMT). Money wasn’t gold and silver after all—rather, it was any means of exchange by which trade was conducted. The medium could be anything, so long as it retained its value in exchange. What really mattered was the supply of it: that it was somewhat commensurate with the amount of economic transactions desired. The Scotsman John Law, who would establish the first paper money system in France, had seen people at the gambling tables of England using bills of exchange, stocks, bonds, banknotes, IOUs—any sort of valuable paper instrument—as de facto money in a pinch. This gave him the essential insight that any paper people trusted to hold its value in exchange could be used as money, not just gold and silver coins:

[John] Law thought that the important thing about money wasn’t its inherent value; he didn’t believe it had any. “Money is not the value for which goods are exchanged, but the value by which they are exchanged,” he wrote. That is, money is the means by which you swap one set of stuff for another set of stuff. The crucial thing, Law thought, was to get money moving around the economy and to use it to stimulate trade and business.

As Buchan writes, “Money must be turned to the service of trade, and lie at the discretion of the prince or parliament to vary according to the needs of trade. Such an idea, orthodox and even tedious for the past fifty years, was thought in the seventeenth century to be diabolical.”

The Invention of Money (The New Yorker)

What was undeniable was that the growing economies of the North Atlantic needed more money, and lots of it; far in excess of what any gold and silver mines anywhere in the world could reasonably provide.

Next: The first (Western) paper money

The Origin of Paper Money 2

When it comes to paper money in the West, the foremost innovator was the United States, as John Kenneth Galbraith points out:

If the history of commercial banking belongs to the Italians and of central banking to the British, that of paper money issued by a central government belongs indubitably to the Americans. (Galbraith, p. 45)

The reason the American colonies had to experiment with paper money was simple: “official” money in the American Colonies was gold and silver coins, and there was a perennial shortage of such coins.

The American colonies had no rich deposits of gold or silver, unlike the Spanish in Latin America. There were no mines and, to make things worse, no mints were allowed in North America. And, to top it all off, the British government forbade the colonies from chartering banks: “Thus bank notes, the obvious alternative to government notes, were excluded.” (Galbraith, p. 47). Colonists used whatever coins they could get their hands on, most of which came from the Spanish colonies to the south. In particular, this meant the Spanish Peso de Ocho Reales, or Piece of Eight: the world’s first global currency. This was also the origin of the famed dollar $ign. Foreign coins would continue to circulate as money in the United States until after the Civil War.

The curious origin of the dollar symbol (BBC)

Since the colonies couldn’t mint their own coins, if you wanted to get your hands on gold and silver coins, you had no other choice but to trade with the outside world. If you didn’t trade with the outside world, then getting sufficient coins was really difficult, severely limiting internal trade. This wasn’t accidental—the British, like all colonial powers, wanted the colonies to be sources of raw materials for their domestic manufacturing industries, and not to be economically self-sufficient.

To help alleviate the ongoing shortage of precious metal coins, local authorities might have passed laws to restrict the export of gold and silver—what we would today call capital controls—but such laws were expressly forbidden by the British government. In the mercantilist world of the 1600s–1700s, the strength of a nation lay in the amount of gold and silver stashed away in its vaults—probably a holdover from the time when gold and silver paid for mercenaries in Europe, before the era of professional standing armies.

And so there was a perennial, ongoing shortage of currency for transactions. This was an anchor around the leg of the domestic economy of the colonies.

…the British colonies in North America suffered from a constant shortage of all coins. The mercantile policies then in vogue in London sought to increase the amount of gold and silver money in Britain and to do whatever was practical in order to prohibit its export, even to its own colonies.

Beginning in 1695, Britain forbade the export of specie to anywhere in the world, including to its own colonies. As a result, the American colonies were forced to use foreign silver coins rather than British pounds, shillings, and pence, and they found the greatest supply of coins in the neighboring Spanish colony of Mexico, which operated one of the world’s largest mints.

Because of the great wealth produced in Mexico and Peru, Spanish coins became the most commonly accepted currency in the world…The most common Spanish coin in use in the British colonies in 1776 was the pillar dollar, so named because the obverse side showed the Eastern and Western hemispheres with a large column on either side.

In Spanish imperial iconography, the columns represented the Pillars of Hercules, or the narrow strait separating Spain from Morocco and connecting the Mediterranean with the Atlantic. A banner hanging from the columns bore the words plus ultra, meaning “more beyond.” The Spanish authorities began issuing this coin almost as soon as they opened the mint in Mexico with the intent of publicizing the discovery of America, which was the plus ultra, the land out beyond the Pillars of Hercules.

Some people say that the modern dollar sign is derived from this pillar dollar. According to this explanation, the two parallel lines represent the columns and the S stands for the shape of the banner hanging from them. Whether the sign was inspired by this coin or not, the pillar dollar can certainly be called the first American silver dollar. (Weatherford, pp. 117-118)

Another thing the colonists did to get around this chronic shortage of metal coins was barter, which led to settling accounts with all sorts of things other than precious metal coins. They might settle accounts, for example, with so-called “country pay” or “country money,” typically cash crops: cod, tobacco, rice, grain, cattle, indigo, whiskey, brandy—whatever was at hand. In 1775, North Carolina declared as many as seventeen different forms of money to be legal tender.

Without the convenience of money, colonists resorted to many less-efficient methods of trading. Barter, of course, was common, particularly in rural areas, but individuals often had to accept goods that they did not particularly need or want only because they had no other way to complete a transaction. They accepted these goods hoping to pass them on in future trades. Some items, most famously tobacco in Virginia and Maryland, worked well in this way and became commodity monies directly or as backing for warehouse receipts. Various other types of warehouse receipts, bills of exchange against deposits in London, and individuals’ promissory notes might also circulate as money. In addition, shopkeepers and employers sometimes issued “shop notes,” a type of scrip—often in small denominations—redeemable at a specific store.

Out of necessity, merchants and wealthy individuals frequently extended credit to others. In an economy that depended heavily on barter, however, one could end up holding debts against many individuals and across a broad array of goods. People naturally hoped to net out some of these debts, but this is extremely difficult under barter. Fortunately, colonial creditors could tally debts in British pounds or colonial currencies even if these currencies were not readily available. In this way, money acted as a unit of account. By attaching a value to things, money accommodated the netting out of debts.

Paper Money and Inflation in Colonial America (Cleveland Fed)
One of the most popular substitutes in North America could be obtained domestically: beads made from marine shells, called wampum, which were used extensively in the tribute economy of the Iroquois nations. Wampum is one of a huge number of currencies around the globe made from seashells, including cowrie shells and dentalium. Since these were regarded as valuable by Native American tribes, they had the added advantage of being tradeable for animal pelts bagged by the Native Americans (who soon stripped the forests bare of beaver in order to get more wampum—and hence more prestige). In 1664 Pieter Stuyvesant arranged a loan in wampum worth over 5,000 guilders to pay the wages of workers constructing the New York citadel. Wampum was even subject to a form of counterfeiting:

The first substitute was taken over from the Indians. From New England to Virginia in the first years of settlement, the wampum or shells used by the Indians became the accepted small coinage. In Massachusetts in 1641, it was made legal tender, subject to some limits as to the size of the transaction, at the rate of six shells to the penny.

However, within a generation or two it began to lose favor. The shells came in two denominations, black and white, the first being double the value of the second. It required but small skill and a smaller amount of dye to convert the lower denomination of currency into the higher.

Also, the acceptability of wampum depended on its being redeemed by the Indians in pelts. The Indians, in effect, were the central bankers for the wampum monetary system, and beaver pelts were the reserve currency into which the wampum could be converted. This convertibility sustained the purchasing power of the shells.

As the seventeenth century passed and settlement expanded, the beavers receded to the evermore distant forests and streams. Pelts ceased to be available; wampum ceased, accordingly, to be convertible and thus, in line with expectation, it lost in purchasing power. Soon it disappeared from circulation except as small change. (Galbraith, pp. 47-48)

Another very popular domestic currency in use was tobacco leaf. In fact, tobacco’s reign as currency in America lasted longer than gold’s:

Tobacco, although regionally more restricted, was far more important than wampum. It came into use as money in Virginia a dozen years after the first permanent settlement in Jamestown in 1607. Twenty-three years later, in 1642, it was made legal tender by the General Assembly of the colony by the interestingly inverse device of outlawing payments that called for payment in gold or silver.

The use of tobacco money survived in Virginia for nearly two centuries and in Maryland for a century and a half – in both cases until the Constitution made money solely the concern of the Federal government. The gold standard, by the common calculation, lasted from 1879 until the cancellation of the final attenuated version by Richard Nixon in 1971. Viewing the whole span of American history, tobacco, though more confined as to region, had nearly twice as long a run as gold. (Galbraith, p. 48)

And such practices might be where Adam Smith came up with his erroneous notion of primitive barter economies, which continues to plague economics and economic history to this day.

Early American Colonists Had a Cash Problem. Here’s How They Solved It (Time)

This illustrates another dictum about money: barter tends to occur in fully monetized market economies where the medium of exchange is in short supply. This is because internal exchanges in market economies take the form of spot transactions among anonymous competing strangers. Anthropologists now know that pre-monetary economies were embedded in social relations and took the forms of reciprocity, redistribution, householding, and ceremonial exchange, rather than constant efforts to “truck, barter and exchange.” Anthropologists have never found an example of a pure barter economy anywhere in the world (e.g. “I’ll give you ten chickens for that cow”).

People in North America and other remote regions were using things like cod, tobacco, grain, brandy, and shells to settle accounts, sure—but these were fully monetized economies that just happened to have a chronic shortage of coins! To get around this, certain items which were particularly valuable because they could be traded with the outside world—like cod in Newfoundland, or tobacco in Virginia—were used to settle accounts. Or, because some items were particularly valuable inside the community, they could be used in subsequent trades as a medium of exchange (like iron nails in Scotland, another Smith example). One might include the “cigarette money” used in prisons in this category. A contemporary example is the use of spruce tips in remote Alaskan towns: spruce tips can only be harvested during a few weeks in the spring and are used in all sorts of exported products (beer, tea, soap, etc.) that are traded with the outside world.

A year after moving to Skagway, Alaska, John Sasfai walked into Skagway Brewing Co. and ordered the signature Spruce Tip Blonde Ale. But instead of pulling out his wallet, the guide for Klondike Tours put a sack of spruce tips on the bar to pay his tab. That’s because in this town, the bounty he foraged from trees near Klondike Gold Rush National Historical Park serves as a currency.

This village, with a year-round population just shy of 1,000, is notably remote – it’s about 100 miles north of Juneau and 800 miles south-east of Anchorage by car. And though stampeders established Skagway during the late-19th-Century gold rush, these days the nuggets of value are plucked from the forest, not panned or mined. While spruce tips – the buds that develop on the ends of spruce tree branches – are only good for cash at Skagway Brewing Co., bartering with spruce tips for food, firewood or coffee (which are delivered by barge once a week) is not uncommon.

The Alaska town where money grows on trees (BBC)

However, in all of Smith’s cases, prices were denominated in standard units of account, but people settled their debts in whatever was at hand. But none of these things were the origin of prices and money, as Smith incorrectly claimed.

To begin with, Adam Smith’s error as to the two most generally quoted instances of the use of commodities as money in modern times, namely that of nails in a Scotch village and that of dried cod in Newfoundland, have already been exposed [as fraudulent] … and it is curious how, in the face of the evidently correct explanation … Adam Smith’s mistake has been perpetuated.

In the Scotch village the dealers sold materials and food to the nail makers, and bought from them the finished nails the value of which was charged off against the debt. The use of money was as well known to the fishers who frequented the coasts and banks of Newfoundland as it is to us, but no metal currency was used simply because it was not wanted.

In the early days of the Newfoundland fishing industry there was no permanent European population; the fishers went there for the fishing season only, and those who were not fishers were traders who bought the dried fish and sold to the fishers their daily supplies. The latter sold their catch to the traders at the market price in pounds, shillings and pence, and obtained in return a credit on their books, with which they paid for their supplies. Balances due by the traders were paid for by drafts on England or France.

A moment’s reflection shows that a staple commodity could not be used as money, because ex hypothesi, the medium of exchange is equally receivable by all members of the community. Thus if the fishers paid for their supplies in cod, the traders would equally have to pay for their cod in cod, an obvious absurdity. In both these instances in which Adam Smith believes that he has discovered a tangible currency, he has, in fact, merely found—credit.

Then again as regards the various colonial laws, making corn, tobacco, etc., receivable in payment of debt and taxes, these commodities were never a medium of exchange in the economic sense of a commodity, in terms of which the value of all other things is measured. They were to be taken at their market price in money. Nor is there, as far as I know, any warrant for the assumption usually made that the commodities thus made receivable were a general medium of exchange in any sense of the words. The laws merely put into the hands of debtors a method of liberating themselves in case of necessity, in the absence of other more usual means. But it is not to be supposed that such a necessity was of frequent occurrence, except, perhaps in country districts far from a town and without easy means of communication.

What is money? (Alfred Mitchell-Innes)

All of this experience showed colonists that multiple things could be used as money, if needed. There was no more magic to a gold standard than to a cowrie standard, or a tobacco standard, a grain standard, a cattle standard, or anything else for that matter. This would prove to be an instrumental lesson in the creation of paper money in the colonies.

Galbraith, for his part, gives an alternative explanation for the chronic lack of precious metals in the American colonies:

Many countries or communities had gold and silver in comparative abundance without mines. Venice, Genoa, Bruges had no Mother Lode. (Nor today does Hong Kong or Singapore.) While the colonists were required to pay in hard coin for what they bought from Britain, they also had products – tobacco, pelts, ships, shipping services – for which British merchants would have been willing, and were quite free, to expend gold and silver.

Much more plausibly, the shortage of hard money in the colonies was another manifestation of Gresham. From the very beginning the colonists experimented with substitutes for metal. The substitutes, being less well regarded than gold or silver, were passed on to others and thus were kept in circulation. The good gold or silver was kept by those receiving it or used for those purchases, including those in the mother country, for which the substitutes were unacceptable. (p. 47)

So the colonists were forced by economic necessity to experiment with paper money, and that is why the United States was the cradle of this innovation. As Galbraith notes of the substitutes above, “None of these substitutes was important as compared with paper money.” (Galbraith, p. 51)

Next: Europe rethinks money

The Origin of Paper Money 1

Where did paper money come from? That’s the question behind this article from The New Yorker: The Invention of Money. It’s a review of recent biographies of John Law and Walter Bagehot. The author concludes:

The present moment in financial invention therefore has some similarities with the period when money in the form we currently understand it—a paper currency backed by state guarantees—was first created. The hero of that origin story is the nation-state. In all good stories, the hero wants something but faces an obstacle. In the case of the nation-state, what it wants to do is wage war, and the obstacle it faces is how to pay for it.

At the same time, I’ve been reading a few popular books on monetary history. One is Jack Weatherford’s The History of Money. Weatherford, best known for his books about Genghis Khan, is eminently readable, and hits most of the major developments. However, he is clearly in the Ron Paul school of economics: gold alone is money, governments are profligate and can’t be trusted, free banking is good, central banks are bad, etc. There are also a number of basic factual errors in the book, which leads me to recommend it only if you take it as a brief survey that gets many things wrong and is a bit outdated.

Weatherford’s major reference for his chapter on paper money is John Kenneth Galbraith’s Money: Whence It Came, Where It Went. So I decided to go directly to the source. Galbraith, a lauded economist, offers a view that is much more authoritative and nuanced than Weatherford’s. Galbraith’s book concentrates mainly on the origins of banking and the modern money system, and not so much on the deep history of money in the ancient world or the Medieval period.

I’d like to take these (and others) and give an account of how the money system works today. While Modern Monetary Theory is a good description of how money works in nation-states in the present, it often doesn’t describe how that system initially came about, and what makes it so radically different from how the money system functioned in ancient economies.

But first, I’d like to say a few brief words on why any of this matters.

Like it or not, money runs the world. If you want to understand how the world works—and how to change it—it’s important to know how the systems comprising it work. Money may seem like a boring topic (sorry!), but I would argue that no knowledge is more fundamental and useful for trying to make things marginally better. I can’t tell you how many people I’ve met who call themselves “socially liberal but fiscally conservative.” And what do they mean by “fiscally conservative”? Nine times out of ten, it’s this: money is inherently scarce; debt is evil; and government budgets should be balanced down to the penny. You also have libertarian Bitcoin cranks, who are convinced that algorithms will save mankind once the state somehow withers away. These views are extraordinarily resistant to any kind of challenge, almost as if they were a de facto religion (in fact, they are probably even more resistant to rational analysis than most people’s religious faith!). Such people would be amenable to a more progressive message if not for the universal brainwashing about what money is, and what it does. History can provide a useful guide.

China’s False Start

All paper money rests on the same fundamental basis: it is a circulating IOU. The name of the creditor backing it, and what is used to securitize it, changes over time, however. Sometimes it’s a particularly reputable member of the community. Sometimes it’s a king or other ruler. Sometimes it’s a democratically-elected government–or more precisely, the future anticipated revenues of that government. Sometimes it’s backed by something tangible, like silver, gold, or real estate (the most common options). Sometimes it’s not. Nowadays, sovereign money is usually backed by the government’s ability to redistribute and to impose binding liabilities on its citizens (and, by extension, its monopoly on the legitimate use of force).

Paper money began where papermaking began: in China. The usual sources were hemp and mulberry bark, and printing blocks were made of wood or metal. Because of China’s strong imperial state structure, centrality, and geographic reach, it could command officially stamped pieces of paper to be accepted by its citizens as currency in lieu of precious metals. The story is told in this excellent podcast by Tim Harford on the origins of paper money: Paper Money (50 Things that Made the Modern Economy)

In Harford’s telling, paper money begins in Sichuan province, where iron coins were used rather than gold and silver in order to keep specie from leaking out of China to the hostile territories surrounding it, such as those of the Jurchen. Iron coins had holes in the middle and were carried around on cords; these strings of coins were called cash.
The problem, as you might expect, was that these strings of heavy iron coins were extremely cumbersome. You would be turning over a larger weight of coins than the weight of the thing you were trying to buy: 10 pounds of coins for a five-pound chicken, or something like that.

Sichuan’s iron currency suffered from serious deficiencies. The low intrinsic value of iron coins, worth no more than a tenth of the equivalent amount of bronze coin, imposed a great burden on merchants who needed to convey their purchasing capital from one place to another, and on ordinary consumers as well. A housewife would have to bring a pound and a half of iron coin to the marketplace to buy a pound of salt, and a merchant from the capital would receive ninety-one and a quarter pounds of iron coin in exchange for an ounce of silver.

Of course, the inconvenience of transporting low-value coin affected bronze currency as well. In the early ninth century, the Tang government created depositories at its capital of Chang’an where merchants could deposit bronze coin in return for promissory notes (known as feiqian, or “flying cash”) that could be redeemed in provincial capitals. “Flying cash” was especially popular among tea merchants who wished to return their profits from the sale of tea in the capital to the distant tea-growing areas of southeastern China. The Song dynasty continued this practice under the rubric of “convenient cash” (bianqian), accepting payments of gold, silver, coin, or silk in return for notes denominated in bronze coin. (The Origins of Value, pp. 67-68)
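Those figures give a sense of just how low-valued iron coin was. Taking the quoted numbers at face value, and 16 ounces to the pound:

```latex
% A merchant received 91.25 pounds of iron coin for one ounce of silver:
\[
  91.25\ \text{lb} \times 16\ \tfrac{\text{oz}}{\text{lb}}
    = 1460\ \text{oz of iron coin per oz of silver}
\]
% Weight for weight, iron coin was worth roughly 1/1460 as much as
% silver -- hence the appeal of paper that merely promised coin.
```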

In the mid-990s, Sichuan was captured by rebels (partly angered by depreciating currency), who shut down the mint. It remained shut even after the government regained control of the province. This prompted some private merchants to issue their own paper bills to compensate for the acute shortage of coins. Such bills represented debt—the debt of the private merchant, of course. These bills soon began to circulate, and people began using them in place of iron coins, as Harford describes:

Instead of carrying around a wagonload of iron coins, a well-known and trusted merchant would write an IOU, and promise to pay his bill later when it was more convenient for everyone.

That was a simple enough idea. But then there was a twist, a kind of economic magic. These “jiaozi”, or IOUs, started to trade freely. Suppose I supply some goods to the eminently reputable Mr Zhang, and he gives me an IOU. When I go to your shop later, rather than paying you with iron coins – who does that? – I could write you an IOU.

But it might be simpler – and indeed you might prefer it – if instead I give you Mr Zhang’s IOU. After all, we both know he’s good for the money. Now you, and I, and Mr Zhang, have together created a kind of primitive paper money – it’s a promise to repay that has a marketable value of its own – and can be passed around from person to person without being redeemed.

This is very good news for Mr Zhang, because as long as people keep finding it convenient simply to pass on his IOU as a way of paying for things, Mr Zhang never actually has to stump up the iron coins. Effectively, he enjoys an interest-free loan for as long as his IOU continues to circulate. Better still, it’s a loan that he may never be asked to repay.

No wonder the Chinese authorities started to think these benefits ought to accrue to them, rather than to the likes of Mr Zhang. At first they regulated the issuance of jiaozi, but then outlawed private jiaozi and took over the whole business themselves. The official jiaozi currency was a huge hit, circulating across regions and even internationally. In fact, the jiaozi even traded at a premium, because they were so much easier to carry around than metal coins.

How Chinese mulberry bark paved the way for paper money (BBC)
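The mechanism Harford describes is simple enough to sketch in code. Everything below (the names, the amount) is hypothetical; the point is that the note settles purchases by changing hands, and the issuer parts with no coins until someone finally redeems it:

```python
# Toy sketch of a circulating IOU (jiaozi-style). Names and amounts
# are hypothetical.

class IOU:
    def __init__(self, issuer, amount):
        self.issuer = issuer      # who ultimately owes the coins
        self.amount = amount      # face value in iron coins
        self.holder = issuer      # the claim starts with the issuer
        self.redeemed = False

    def pay(self, new_holder):
        """Settle a purchase by handing over the note itself."""
        self.holder = new_holder

    def redeem(self):
        """Only now must the issuer actually produce the coins."""
        self.redeemed = True
        return f"{self.issuer} pays {self.amount} coins to {self.holder}"

note = IOU(issuer="Mr Zhang", amount=1000)
note.pay("silk trader")    # Zhang buys silk with his IOU
note.pay("tea merchant")   # the silk trader buys tea with it
note.pay("innkeeper")      # the tea merchant pays the innkeeper

# While the note keeps circulating, Zhang enjoys an interest-free loan:
print(note.redeemed)       # False -- no coins have moved at all
print(note.redeem())       # only redemption finally costs Zhang coins
```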

Over the next ten years, these “exchange bills” became important in China’s intraregional trade, but bogus bills issued by unscrupulous private traders remained an ongoing headache for government officials. There were growing calls for the government to get more involved in the circulation of bills. Enter the new prefect of Chengdu, one Zhang Yong, who issued a series of reforms to address the problem in 1005. He:

1.) reopened Sichuan’s mints and introduced a new large iron coin that was equivalent to ten small iron coins, or two bronze coins;

2.) restricted the right to issue exchange bills to a consortium of sixteen merchant houses in Chengdu that were known to have sufficient financial resources to back the bills up, and;

3.) standardized the bills by mandating that they be issued in a specified size, color and format, using government-supplied labor and materials (although merchants could add their own watermark).

There were no standard denominations; rather, the merchants ascribed the value of the note in ink as needed. A three percent fee was charged for cashing in the notes. There was no limit on the number of bills issued. The amount of bills in circulation tended to vary with the seasons: more bills were issued in the early summer when new silk reached the market and in the fall during the rice harvest.

There were still problems with the paper currency, however, such as counterfeiting and overissuance of bills without sufficient backing. In 1024 under a new governor, Xue Tian, the government took over the issuance of jiaozi. A state-run Jiaozi Currency Bureau was established in Chengdu and given exclusive rights to issue jiaozi. The bills had the same format, but were issued in fixed denominations: one and ten guan. Most significantly, the bills had an expiration date of two years, exchangeable for fresh ones, giving the government a modicum of control over the amount issued and preventing the counterfeiting of worn or outdated bills. Also, quotas were established for the issue of the currency. Tea merchants engaged in intraregional and international trade were the most enthusiastic users of the currency, as it eliminated the need to transport heavy coins and prevented robbery by bandits (note that the needs of traveling merchants were also instrumental in the creation of Bills of Exchange issued by banks in medieval Europe centuries later).

Yet there were still problems. The government issued notes to procure military supplies from the merchants, and the ongoing costs of wars on the frontier led to their overissue. Plus, a new emperor nationalized the tea industry, meaning that the major consumers of jiaozi—the tea merchants—no longer had as much use for them. This loss of demand, alongside oversupply, caused a sharp depreciation in the value of the currency in the market. Instead of trading at a ten percent premium, the bills were now accepted at a ten percent discount. In 1107 the government issued a new paper currency—the qianyin—at a rate of 1:4 to the old, depreciating the earlier jiaozi bills in an effort to reduce the supply.
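To put rough numbers on that swing: at a ten percent premium, a 100-guan note traded for the equivalent of 110 guan in coin; at a ten percent discount, only 90. And if the 1:4 conversion is read as four guan of old jiaozi per one guan of new qianyin, the old issue was effectively written down to a quarter of its face value:

```latex
\[
  \frac{90}{110} \approx 0.82
  \qquad\text{(what a note now fetched, relative to the premium days)}
\]
\[
  \frac{1}{4} = 0.25
  \qquad\text{(face value retained by holders of old bills in 1107)}
\]
```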

The rest of the history of China’s bills is basically a cycle of the same thing: issuing new bills, overspending due to military needs on the frontier, rampant counterfeiting, bills depreciating, demonetizing old notes, new dynasties issuing new bills, and so on. Bills were still in use in trade when Marco Polo visited China. Here is a description from the fourteenth century by the Arab traveller Ibn Battuta:

The Chinese use neither [gold] dinars nor [silver] dirhams in their commerce. All the gold and silver that comes into their country is cast by them into ingots, as we have described. Their buying and selling is carried on exclusively by means of pieces of paper, each of the size of the palm of the hand, and stamped with the sultan’s seal. Twenty-five of these pieces of paper are called a balisht, which takes the place of the dinar with us [as the unit of currency].

This demonstrates some of the essential dictums of Modern Monetary Theory.

The first is Hyman Minsky’s dictum: Anyone can create money, the secret is in getting it accepted.

The second is Felix Martin’s definition of money: Money is tradeable debt.

The third is the observation that the credit that bears the highest reputation is typically that of the sovereign. Gresham’s Law being what it is, this usually means that the sovereign’s money will drive out all competitors, as we’ll see much later in the United States during the Civil War.

As a reminder, Gresham’s Law is this: Bad money drives out good, or perhaps, more accurately, people spend “lesser” money if they can, and hoard “greater” money for themselves.

Gresham’s Law…is perhaps the only economic law that has never been challenged, and for the reason that there has never been a serious exception. Human nature may be an infinitely variant thing. But it has its constants. One is that, given a choice, people keep what is best for themselves, i.e. for those whom they love the most. (Galbraith, p. 8)
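Gresham’s Law is mechanical enough to illustrate with a toy simulation (all parameters hypothetical): if full-weight and debased coins pass at the same face value, and everyone prefers to spend the debased ones, the good coins pile up in hoards while the payment stream fills with bad coin.

```python
import random

# Toy Gresham's Law sketch: "good" (full-weight) and "bad" (debased)
# coins pass at the same face value, so everyone spends bad coins
# first and hoards good ones. All parameters are hypothetical.

random.seed(1)
agents = [{"good": 10, "bad": 10} for _ in range(50)]
payments = {"good": 0, "bad": 0}

def spend(payer, payee):
    """Pay one coin of face value 1, preferring the debased coin."""
    kind = "bad" if payer["bad"] > 0 else "good"
    if payer[kind] > 0:
        payer[kind] -= 1
        payee[kind] += 1
        payments[kind] += 1

for _ in range(10_000):              # random pairwise purchases
    a, b = random.sample(range(len(agents)), 2)
    spend(agents[a], agents[b])

print(payments)  # bad coins dominate the payment stream; good coins
                 # sit in hoards, "driven out" of circulation
```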

A similar rationale led to the establishment of banks and banking in Northern Europe during the Age of Sail. You deposited coins and got a receipt for the amount of coins stashed in the vault. These receipts could be used to pay for things, with the value equivalent to the coins traded (in fact, the notes were more valuable, since they couldn’t be melted down or devalued).

A final interesting note: overissuance of paper currencies and lavish spending by the Yongle emperor Zhu Di (on wars, but also notably on the Chinese treasure ship voyages) led to China going back onto a silver standard just in time for the European discovery and conquest of the New World. The Chinese demand for silver is what fueled the European trade with the Far East, since the Europeans had nothing else that the Chinese wanted to exchange for goods like silks and porcelain. Without that silver standard, who knows what would have happened?

The sizable deficits incurred by Yongle’s costly foreign expeditions, including the famous maritime explorations of Admiral Zheng He and his fleet, and the emperor’s decision to relocate the Ming capital from Nanjing to Beijing were abated, albeit temporarily, by printing more money. Finally, in the 1430s, the Ming yielded to economic realities, abandoning its paper currency and capitulating to the dominance of silver in the private economy. The Ming state gradually converted its most important sources of revenue to payments in silver, while suspending emission of paper money and minting of bronze coin.

Though still uncoined, silver prevailed as the monetary standard of the Ming and subsequent Qing dynasty (1644-1911), fueled from the sixteenth century onward by the import of vast quantities of foreign silver from Japan and the Spanish colonies in the Americas. In times of fiscal crisis, such as on the eve of the fall of the Ming dynasty in 1644 and during the worldwide depression of the 1830s to 1840s, appeals to restore paper currency were renewed, but ignored. In the nineteenth century private banks, both Chinese and foreign, began to issue negotiable bills, but the weakness of the central government after its defeat in the Opium War precluded the emergence of a unified currency…Not until 1935, under the Republic of China, did China once again have a unified system of paper currency. (The Origins of Value, p. 87-89)

Although paper money first originated in China, the paper money we use today has no direct lineage with these systems. Government-issued paper money was invented independently in Western Europe, and under very different circumstances. We’ll take a look at that next time.

Despite the importance of paper money in Chinese history, the modern world system of paper money did not develop in China, or even in the Mediterranean homeland of Marco Polo or ibn-Batuta. It evolved in the trading nations around the North Atlantic. (Weatherford, p. 129)