The Origin of Paper Money 6

1. France

France ended up conducting its own monetary experiment with paper money at around the same time as the American colonies, in the early 1700s. Unlike the American experiment, it was not successful. It was initiated by a Scottish immigrant and gambling addict fleeing a murder charge, by the name of John Law (rendered "Jean Lass" in French).

At this time—the early 1700s—France was having much the same conversation about the money supply as the Anglo-Saxon world. In France, though, the problem was not so much a shortage of coins as an excess of sovereign debt, run up by the wild spending of France's rulers on foreign wars and luxurious living.

Although France was probably the wealthiest and most powerful nation in Western Europe, its debts (really, the King's debts) exceeded its assets by quite a bit, at least on paper. The country struggled to raise enough funds via its antiquated and inefficient feudal tax system to pay the interest on its bonds; France's debt traded in secondary markets as what we might today call junk bonds (i.e. with low odds of repayment).

Louis XIV, having lived too long, had died the year before Law’s arrival. The financial condition of the kingdom was appalling: expenditures were twice receipts, the treasury was chronically empty, the farmers-general of the taxes and their horde of subordinate maltôtiers were competent principally in the service of their own rapacity.

The Duc de Saint-Simon, though not always the most reliable counsel, had recently suggested that the straightforward solution was to declare national bankruptcy – repudiate all debt and start again. Philippe, Duc d’Orleans, the Regent for the seven-year-old Louis XV, was largely incapable of thought or action.

Then came Law. Some years earlier, it is said, he had met Philippe in a gambling den. The latter 'had been impressed with the Scotsman's financial genius.' Under a royal edict of 2 May 1716, Law, with his brother, was given the right to establish a bank with capital of 6 million livres, about 250,000 English pounds…
(Galbraith, pp. 21-22)

…The creation of the bank proceeded in clear imitation of the already successful Bank of England. Under special license from the French monarch, it was to be a private bank that would help raise and manage money for the public debt. In keeping with his theories on the benefits of paper money, Law immediately began issuing paper notes representing the supposedly guaranteed holdings of the bank in gold coins.

Law's…bank took in gold and silver from the public and lent it back out in the form of paper money. The bank also took deposits in the form of government debt, cleverly allowing people to claim the full value of debts that were trading at heavy discounts: if you had a piece of paper saying the king owed you a thousand livres, you could get only, say, four hundred livres in the open market for it, but Law's bank would credit you with the full thousand livres in paper money. This meant that the bank's paper assets far outstripped the actual gold it had in store, making it a precursor of the "fractional-reserve banking" that's normal today. Law's bank had, by one estimate, about four times as much paper money in circulation as its gold and silver reserves…

The new paper money had an attractive feature: it was guaranteed to trade for a specific weight of silver, and, unlike coins, could not be melted down or devalued. Before long, the banknotes were trading at more than their value in silver, and Law was made Controller General of Finances, in charge of the entire French economy.

The Invention of Money (The New Yorker)

It's also worth noting that banknotes were denominated in the unit of account, unlike coins, which typically were not. Coins' value usually fluctuated against the unit of account (what prices were expressed in), sometimes by the day. What a silver coin or gold Louis d'Or was worth on one day might be different the next, especially since the monarchs liked to devalue the currency in order to decrease the amount of their debts. However, if you brought, say, 10 livres, 18 sous worth of coins to Law's bank, the paper banknote would be written up for the equivalent amount the coins were worth at that time: 10 livres, 18 sous.

By buying back the government's debt, Law was able to "retire" it. Thus, the money circulating was ultimately backed by government debt (bonds), just like our money today. Law's promise to redeem the notes for specie gave users the confidence to use them. Later on, the government would decree the notes of the Banque Generale to be the "official" money for payment of taxes and settlement of all debts, legitimizing their value by fiat. Law later attempted to sever the link to gold and silver by demonetizing them. He was not successful; paper money was far too novel at the time for people to trust its value in the absence of anything tangible backing it.
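To make the arithmetic of this kind of debt-backed, fractionally-reserved note issue concrete, here is a minimal sketch in Python. The figures are hypothetical, chosen only to echo the roughly four-to-one ratio of paper to specie and the thousand-livre/four-hundred-livre bond example quoted above; they are not from the historical record.

```python
# A minimal sketch of the fractional-reserve arithmetic described above.
# All figures are hypothetical, chosen to mirror the ~4:1 ratio of paper
# to specie and the discounted-bond example quoted from The New Yorker.

specie_reserves = 1_000_000       # livres of gold and silver actually held
notes_in_circulation = 4_000_000  # livres of paper notes issued against them

reserve_ratio = specie_reserves / notes_in_circulation
print(f"Reserve ratio: {reserve_ratio:.0%}")  # only 25% of notes are redeemable at once

# Depositing discounted royal debt at face value creates paper "out of thin air":
bond_face_value = 1_000    # what the king nominally owes the bondholder
bond_market_value = 400    # what the bond fetches on the secondary market
notes_credited = bond_face_value

print(f"Notes issued per bond: {notes_credited} livres, "
      f"against an asset worth {bond_market_value} livres at market prices")
```

The point of the sketch is simply that the bank's note liabilities grow much faster than the hard assets behind them; everything works so long as holders do not all demand redemption at once.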

Little of what transpired would be all that unusual today, but it was pretty radical for the early 1700s. Had Law stopped at this point, it's likely that all of this would have been successful, as Galbraith points out:

In these first months, there can be no doubt, John Law had done a useful thing. The financial position of the government was eased. The bank notes loaned to the government and paid out by it for its needs, as well as those loaned to private entrepreneurs, raised prices….[and] the rising prices…brought a substantial business revival.

Law opened branches of his bank in Lyons, La Rochelle, Tours, Amiens and Orleans; presently, in the approximate modern language, he went public. His bank became a publicly chartered company, the Banque Royale.

Had Law stopped at this point, he would be remembered for a modest contribution to the history of banking. The capital in hard cash subscribed by the stockholders would have sufficed to satisfy any holders of notes who sought to have them redeemed. Redemption being assured, not many would have sought it.

It is possible that no man, having made such a promising start, could have stopped…
(Galbraith, pp. 22-23)

Trading government debt for paper money helped lower the government's debts, but on paper, France's liabilities still exceeded its assets. It had one asset, however, that had not yet been monetized—millions of acres of land on the North American continent. So Law set out to monetize that land by turning it into shares in a joint-stock company called the Mississippi Company (Compagnie d'Occident). The Mississippi Company had a monopoly on all trading with the Americas. Buying a share in the company meant owning a cut of the profits (i.e. equity) from trading with North America.

The first loans and the resulting note issue having been visibly beneficial – and also a source of much personal relief – the Regent proposed an additional issue. If something does good, more must do better. Law acquiesced.

Sensing the need, he also devised a way of replenishing the reserves with which the Banque Royale backed up its growing volume of notes. Here he showed that he had not forgotten his original idea of a land bank.

His idea was to create the Mississippi Company to exploit and bring to France the very large gold deposits which Louisiana was thought to have as subsoil. To the metal so obtained were also to be added the gains of trade. Early in 1719, the Mississippi Company (Compagnie d'Occident), later the Company of the Indies, was given exclusive trading privileges in India, China and the South Seas. Soon thereafter, as further sources of revenue, it received the tobacco monopoly, the right to coin money and the tax farm. (Galbraith, p. 23)

Law—or the Duc d’Arkansas as he was now known—talked up the corporation so well that the value of the shares skyrocketed—probably the world’s very first stock bubble (but hardly the last). Gambling fever was widespread and contagious, as the desire to get rich by doing nothing is a human universal. The term “millionaire” was coined. Law took advantage of the inflated share price to buy back more of the government’s debt. And the money to buy the shares at the inflated prices was printed by the bank itself. Knowing that there was far more paper than gold and silver to back it in the kingdom, Law then tried to break the link between paper money and specie by demonetizing gold and silver; at one point making it illegal to even hold precious metals.

He was unsuccessful. Paper money was still too new, and people were unwilling to trust it without the backing of precious metal; the attempt only deepened the loss of faith in the currency. Later suspensions of convertibility were undertaken only after generations of paper money use. Law's System went from the height of the bubble to total collapse in less than a year.

[Law] funded the [Mississippi] company the same way he had funded the bank, with deposits from the public swapped for shares. He then used the value of those shares, which rocketed from five hundred livres to ten thousand livres, to buy up the debts of the French King. The French economy, based on all those rents and annuities and wages, was swept away and replaced by what Law called his “new System of Finance.”

The use of gold and silver was banned. Paper money was now “fiat” currency, underpinned by the authority of the bank and nothing else. At its peak, the company was priced at twice the entire productive capacity of France…that is the highest valuation any company has ever achieved anywhere in the world.

Galbraith and Weatherford summarize the shell game that Law’s “system” ended up becoming:

To simplify slightly, Law was lending notes of the Banque Royale to the government (or to private borrowers) which then passed them on to people in payment of government debts or expenses. These notes were then used by the recipients to buy stock in the Mississippi Company, the proceeds from which went to the government to pay expenses and pay off creditors who then used the notes to buy more stock, the proceeds from which were used to meet more government expenditures and pay off more public creditors. And so it continued, each cycle being larger than the one before. (Galbraith, p. 24)

The Banque Royale printed paper money, which investors could borrow in order to buy stock in the Mississippi company; the company then used the new notes to pay out its bogus profits. Together the Mississippi Company and the Banque Royale were producing paper profits on each other's accounts. The bank had soon issued twice as much paper money as there was specie in the whole country; obviously it could no longer guarantee that each paper note would be redeemed in gold. (Weatherford, p. 131)
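A toy simulation helps make Galbraith's point that "each cycle being larger than the one before" concrete. The starting issue and the growth factor below are invented purely for illustration; nothing here comes from the historical record.

```python
# Toy model of the circular flow Galbraith describes: notes are lent to the
# government, spent, used to buy Mississippi Company shares, and the proceeds
# fund the next, larger round of issue. All figures are hypothetical.

issue = 10_000_000       # livres of new notes printed in the first round
growth_per_cycle = 1.5   # assume each round is 50% larger than the last
notes_outstanding = 0

for cycle in range(1, 6):
    notes_outstanding += issue   # new notes enter circulation and stay there
    # recipients buy Company shares; the proceeds pay government expenses and
    # creditors, who buy more shares, prompting a still larger issue next round
    issue *= growth_per_cycle
    print(f"Cycle {cycle}: {notes_outstanding:,.0f} livres of notes outstanding")
```

However the specific numbers are chosen, the structure is the same: the note issue compounds with every turn of the wheel, while the specie backing it stays roughly fixed.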

Such a scheme couldn't last, of course. Essentially the entire French economy—its central bank, its money supply, its tax system, and the monopoly on land in North America—was in the hands of one single, giant conglomerate run by one man. That meant that when one part of the system failed, all the rest went down with it, like mountain climbers roped together.

Because the central bank owned the Mississippi company, it had an incentive to loan out excess money to drive the share price up—in other words, to inflate a stock bubble based on credit. This is always a bad idea. Finally, Law’s exaggeration of the returns on investments in the Mississippi Company inflated expectations far beyond what was realistic.

The popping of the Mississippi stock bubble, followed by a run on the bank, was enough to bring the whole thing crashing down.

People started to wonder whether these suddenly lucrative investments were worth what they were supposed to be worth; then they started to worry, then to panic, then to demand their money back, then to riot when they couldn’t get it.

Gold and silver were reinstated as money, the company was dissolved, and Law was fired, after a hundred and forty-five days in office. In 1720, he fled the country, ruined. He moved from Brussels to Copenhagen to Venice to London and back to Venice, where he died, broke, in 1729.

The Invention of Money (The New Yorker)

As Law must have known, if you gamble big, sometimes you lose big.

Some of the death of the Bank was murder, not suicide. As part of his System, one of Law's initiatives was to simplify and modernize the inefficient and antiquated French tax system. Taxes were collected by tax farmers (much as in ancient Rome), and Law threatened to upset their apple cart. He also attempted to end the sale of government offices to the highest bidder. This made him a lot of enemies among the moneyed classes, who thrived on graft and corruption. Such influential people (notably the financiers the Paris brothers) were instrumental in the run on the bank and the subsequent loss of confidence in the money system:

[Law] set about streamlining a tax system riddled with corruption and unnecessary complexity. As one English visitor to France in the late seventeenth century observed, "The people being generally so oppressed with taxes, which increase every day, their estates are worth very little more than what they pay to the King; so that they are, as it were, tenants to the Crown, and at such a rack rent that they find great difficulty to get their own bread." The mass of offices sold to raise money had caused one of Louis XIV's ministers to comment, "When it pleases Your Majesty to create an office, God creates a fool to purchase it." There were officials for inspecting the measuring of cloth and candles; hay trussers; examiners of meat, fish and fowl. There was even an inspector of pigs' tongues.

This did nothing for efficiency, Law deemed, and served only to make necessities more expensive and to encourage the holders of the offices "to live in idleness and deprive the state of the service they might have done it in some useful profession, had they been obliged to work." In place of the hundreds of old levies he swept away (over forty in one edict alone), Law introduced a new national taxation system called the denier royal, based on income. The move caused an outcry among the holders of offices, many of whom were wealthy financiers and members of the Parliament, but delight among the public. "The people went dancing and jumping about the streets," wrote Defoe. "They now pay not one farthing tax for wood, coal, hay, oats, oil, wine, beer, bread, cards, soap, cattle, fish." (Janet Gleeson, Millionaire, pp. 155-156)

Michel Aglietta, in his magisterial work on money, notes that Law…

…wanted to introduce the logic of capitalism in France, based on providing credit through money creation. Money creation had to be based on expected future wealth, and no longer on the past wealth accumulated in precious metals. (Aglietta, p. 206, emphasis in original)

The danger is that if this wealth fails to materialize, or if people lose the belief that it will, confidence in the system is lost, and failure soon follows.

Although John Law has come down in history as a grifter, and his ideas as fundamentally unsound, many of his ideas eventually became fundamental tenets of modern global finance:

The great irony of Law’s life is that his ideas were, from the modern perspective, largely correct. The ships that went abroad on behalf of his great company began to turn a profit. The auditor who went through the company’s books concluded that it was entirely solvent—which isn’t surprising, when you consider that the lands it owned in America now produce trillions of dollars in economic value.

Today, we live in a version of John Law’s system. Every state in the developed world has a central bank that issues paper money, manipulates the supply of credit in the interest of commerce, uses fractional-reserve banking, and features joint-stock companies that pay dividends. All of these were brought to France, pretty much simultaneously, by John Law.

The Invention of Money (The New Yorker)

Law's efforts left a lingering suspicion of paper money in France. Unfortunately, the revenue problem was not definitively solved. Going back on a specie standard dealt a huge blow to commerce. While England's paper money system flourished, France stagnated economically. Eventually, the government's revenue situation became so dire that the King had no choice but to call the Estates General in 1789—the extremely rare parliamentary assembly whose summoning kicked off the French Revolution.

Once the Mississippi bubble burst, a lot of the capital in France needed some new outlet to invest in. Much of that capital fled across the Channel to England, which at the time was inflating a stock bubble of its own:

France’s ruin was England’s gain. Numerous bruised Mississippi shareholders chose to reinvest in English South Sea shares.
The previous month, with a weather eye to developments in France, the South Sea Company managed to beat its rival the Bank of England and secure a second lucrative deal with the government whereby it took over a further £48 million of national debt and launched a new issue of shares. A multitude of English and foreign investors were now descending on London as they had flocked less than a year earlier to Paris "with as much as they can carry and subscribing for or buying shares."

In Exchange Alley–London's rue Quincampoix–the sudden surge of new money also bubbled a plethora of alternative companies launched to capitalize on the new fashion for financial fluttering… (Gleeson, p. 200)

2. England

Britain chose a different tack – sovereign debt would be monetized and circulate as money. It too utilized the joint-stock company model that had been invented in the previous centuries to enable Europeans to raise the funds to exploit and colonize the rest of the world. A bank was founded as a chartered company to take in money through subscribed shares and loan that money to the King. That debt—and not land—would serve as the backing for the notes issued by the bank. The notes would then circulate as money, albeit alongside precious metal coins and several other forms of payment. As with the original invention of sovereign debt in northern Italy, it was used to raise the necessary funds for war:

The modern system for dealing with [the] problem [of funding wars] arose in England during the reign of King William, the Protestant Dutch royal who had been imported to the throne of England in 1689, to replace the unacceptably Catholic King James II.

William was a competent ruler, but he had serious baggage—a long-running dispute with King Louis XIV of France. Before long, England and France were involved in a new phase of this dispute, which now seems part of a centuries-long conflict between the two countries, but at the time was variously called the Nine-Years’ War or King William’s War. This war presented the usual problem: how could the nations afford it?

King William’s administration came up with a novel answer: borrow a huge sum of money, and use taxes to pay back the interest over time. In 1694, the English government borrowed 1.2 million pounds at a rate of eight per cent, paid for by taxes on ships’ cargoes, beer, and spirits. In return, the lenders were allowed to incorporate themselves as a new company, the Bank of England. The bank had the right to take in deposits of gold from the public and—a second big innovation—to print “Bank notes” as receipts for the deposits. These new deposits were then lent to the King. The banknotes, being guaranteed by the deposits, were as good as gold money, and rapidly became a generally accepted new currency.

The Invention of Money (The New Yorker)

From this point forward, money would be circulating government debt. Plus, its value would be based on future revenues, as Aglietta noted above, and not just on the amount of gold and silver coins floating around.

The originality of the Bank of England was that it was not a deposit bank. Unlike for the Bank of Amsterdam, the coverage for the notes issued was very low (3 percent in the beginning). These notes, the counterparty to its loans to the state, replaced bills of exchange and became national and international means of payment for the bank’s customers.

They were not legal tender until 1833. But the securities issued by the bank, bringing interest on the public debt, became legal tender for all payments to the government from 1697 onwards. (Aglietta, pp. 136-137)

Why did the King of England have to borrow at all? For a couple of reasons. The power to raise taxes had been taken away from the King and given to Parliament as a consequence of the English Revolution. That revolutionary era also witnessed the inauguration of goldsmith banking (such as that undertaken by John Law's own family of goldsmiths). These goldsmiths' receipts were the forerunners of the banknote:

The English Civil War…broke out because parliament disputed the king’s right to levy taxes without its consent. The use of goldsmith’s safes as secure places for people’s jewels, bullion and coins increased after the seizure of the mint by Charles I in 1640 and increased again with the outbreak of the Civil War. Consequently some goldsmiths became bankers and development of this aspect of their business continued after the Civil War was over.

Within a few years of the victory by the parliamentary forces, written instructions to goldsmiths to pay money to another customer had developed into the cheque (or check in American spelling). Goldsmiths’ receipts were used not only for withdrawing deposits but also as evidence of ability to pay and by about 1660 these had developed into the banknote.

Warfare and Financial History (Glyn Davies, History of Money online)

By this time, control over money had passed into the hands of a rising mercantile class, who—thanks to the staggering wealth produced by globalized trade—possessed more wealth than mere princes and kings, but lacked the ability to write laws or to print money, powers which they strongly coveted. It was these merchants and "moneyed men" (often members of the Whig party in Parliament) who backed the Dutch stadtholder William of Orange's claim to the English throne in 1688.

The banknotes began to circulate widely, displacing coins and bills of exchange. And it didn't stop there: more money was quickly needed, and the Bank acquired more influence. Part of this was due to England being a naval—rather than an army—power. Warships require huge expenditures of capital to build. They also require a vast panoply of resources, such as wood, nails, iron, cloth, stocked provisions, and so forth; whereas land-based armies mainly require paying soldiers and provisions (which can be commandeered). Thus, the financial means to mobilize these resources were much more likely to develop in naval powers such as Holland and England than in continental powers like France, Austria and Spain.

This important post from the WEA Pedagogy blog uses excerpts from Ellen Brown's Web of Debt to lay out the creation of the Bank of England, and, consequently, central banking in general (and is well worth reading in full):

William was soon at war with Louis XIV of France. To finance his war, he borrowed 1.2 million pounds in gold from a group of moneylenders, whose names were to be kept secret. The money was raised by a novel device that is still used by governments today: the lenders would issue a permanent loan on which interest would be paid but the principal portion of the loan would not be repaid.

The loan also came with other strings attached. They included:

– The lenders were to be granted a charter to establish a Bank of England, which would issue banknotes that would circulate as the national paper currency.

– The Bank would create banknotes out of nothing, with only a fraction of them backed by coin. Banknotes created and lent to the government would be backed mainly by government I.O.U.s, which would serve as the “reserves” for creating additional loans to private parties.

– Interest of 8 percent would be paid by the government on its loans, marking the birth of the national debt.

– The lenders would be allowed to secure payment on the national debt by direct taxation of the people. Taxes were immediately imposed on a whole range of goods to pay the interest owed to the Bank.

The Bank of England has been called “the Mother of Central Banks.” It was chartered in 1694 to William Paterson, a Scotsman who had previously lived in Amsterdam. A circular distributed to attract subscribers to the Bank’s initial stock offering said, “The Bank hath benefit of interest on all moneys which it, the Bank, creates out of nothing.” The negotiation of additional loans caused England’s national debt to go from 1.2 million pounds in 1694 to 16 million pounds in 1698. By 1815, the debt was up to 885 million pounds, largely due to the compounding of interest. The lenders not only reaped huge profits, but the indebtedness gave them substantial political leverage.

The Bank’s charter gave the force of law to the “fractional reserve” banking scheme that put control of the country’s money in a privately owned company. The Bank of England had the legal right to create paper money out of nothing and lend it to the government at interest. It did this by trading its own paper notes for paper bonds representing the government’s promise to pay principal and interest back to the Bank — the same device used by the U.S. Federal Reserve and other central banks today.

Note that the interest on the loan is paid, but never the loan itself. That meant that tax revenues were increasingly funneled to a small creditor class to whom the government was indebted. Today, we call such people bondholders, and they exercise their leverage over governments through the bond markets. For all intents and purposes, this system ended government sovereignty over money and tied the hands of even elected governments, limiting their ability to spend tax money on the domestic needs of their own people. Control over the state's money was lost forever.
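The arithmetic of such a "permanent loan" is simple enough to sketch. The principal and rate below are the 1694 figures quoted above; the year counts are arbitrary and only illustrate how the charge accumulates while the principal is never extinguished.

```python
# Sketch of the "permanent loan": the principal is never repaid, so the
# interest is a perpetual charge against tax revenue. Principal and rate are
# the 1694 figures quoted above; the year counts are arbitrary.

principal = 1_200_000   # pounds lent to the Crown in 1694
rate = 0.08             # annual interest rate

annual_interest = principal * rate
print(f"Interest owed every year: {annual_interest:,.0f} pounds")

for years in (10, 50, 100):
    total_paid = annual_interest * years
    print(f"After {years:>3} years: {total_paid:,.0f} pounds paid in interest, "
          f"with the {principal:,.0f}-pound principal still outstanding")
```

After a century, the Crown has handed over several times the original sum in interest alone, yet still owes every penny of the principal—which is exactly the arrangement the lenders wanted.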

An interesting couple of notes: William Paterson was, like John Law, a Scotsman—giving credence to the claim that it was the Scots who "invented capitalism" (Adam Smith and James Watt were also Scots). It also raises the idea (to me, anyway) that the modern financial system was started by instinctive hustlers and gamblers. We've already referred to John Law's expertise at the gambling tables of Europe and his ability to inspire confidence in his schemes. Paterson, upon returning to Scotland, began raising funds via stock for an ambitious scheme to plant a colony in Central America. This scheme, the Darien scheme, ended up being one of the worst disasters in history. Not only that, it collapsed so badly that Scotland's entire financial health was devastated, and it is considered to be a factor in Scotland signing the Acts of Union, politically joining with England to the south.

For an overview of the Darien scheme, see this: Scotland’s lessons from Darien debacle (BBC)

The WEA Pedagogy blog then adds some additional details:

Some more detail of interest is that the creation of the Bank of England was tremendously beneficial for England. The King, no longer constrained, was able to build up his navy to counter the French. The massive (deficit) spending required for this purpose led to substantial progress in industrialization.

Quoting Wikipedia on this: “As a side effect, the huge industrial effort needed, including establishing ironworks to make more nails and advances in agriculture feeding the quadrupled strength of the navy, started to transform the economy. This helped the new Kingdom of Great Britain – England and Scotland were formally united in 1707 – to become powerful. The power of the navy made Britain the dominant world power in the late 18th and early 19th centuries”

The post then summarizes the history of the creation of central banking:

…It is in this spirit that we offer a “finance drives history” view of the creation of the first Central Bank. The history above can be encapsulated as follows:

1. Queen Elizabeth asserted and acquired the sovereign right to issue money.
2. The moneylenders (the mysterious 0.1% of that time) financed and funded a revolution against the king, acquiring many privileges in the process.
3. Then they financed and funded the restoration of the aristocracy, acquiring even more privileges in the process.
4. Finally, when the King was in desperate straits to raise money, they offered to lend him money at 8% interest, in return for creating the Bank of England, acquiring permanently the privilege of printing money on behalf of the king.

The process by which money was created by the Bank of England is extremely interesting. They acquired the debt of the King. This debt was used as collateral/backing for the money they created. The notes they issued were legal tender in England. Whenever necessary, they were prepared to exchange them for gold, at the prescribed rates. However, when the confidence of the public is high, the need for actual gold as backing is substantially reduced.

Origins of Central Banking (WEA Pedagogy Blog)

As I noted above, the importance of the Navy in the subsequent industrialization of England is often overlooked. A few scholars have argued that Britain's emphasis on naval power was a factor in England (and not somewhere else) becoming the epicenter of the Industrial Revolution. Many of its key inventions were sponsored by the government in order to fight and navigate more effectively at sea (from accurate clocks and charts to canned food). Even early mass production was prompted by the needs of the British Navy: ships' pulley blocks were among the first items to be mass-produced by machinery.

Just as in other countries, the needs of war caused the Bank to issue more and more notes, greatly increasing the national debt. However, the vast profits of industrialization and colonialism were enough to support it. When convertibility was temporarily suspended out of necessity (during the wars with France at the end of the eighteenth century), paper money continued to carry the trust of the public, unlike in France. Galbraith sums up the subsequent history of the Bank of England:

In the fifteen years following the granting of the original charter the government continued in need, and more capital was subscribed by the Bank. In return, it was accorded a monopoly of joint-stock, i.e., corporate, banking under the Crown, one that lasted for nearly a century. In the beginning, the Bank saw itself merely as another, though privileged, banker.

Similarly engaged in a less privileged way were the goldsmiths, who by then had emerged as receivers of deposits and sources of loans and whose operations depended rather more on the strength of their strong boxes than on the rectitude of their transactions. They strongly opposed the renewal of the Bank’s charter. Their objections were overcome, and the charter was renewed.

Soon, however, a new rival appeared to challenge the Bank’s position as banker for the government. This was the South Sea Company. In 1720, after some years of more routine existence, it came forward with a proposal for taking over the government debt in return for various concessions, including, it was hoped, trading privileges to the Spanish colonies, which, though it was little noticed at the time, required a highly improbable treaty with Spain.

The Bank of England bid strenuously against the South Sea Company for the public debt but was completely outdone by the latter’s generosity, as well as by the facilitating bribery by the South Sea Company of Members of Parliament and the government. The rivalry between the two companies did not keep the Bank from being a generous source of loans for the South Sea venture. All in all, it was a narrow escape.

For the enthusiasm following the success of the South Sea Company was extreme. In the same year that Law’s operations were coming to their climax across the Channel, a wild speculation developed in South Sea stock, along with that in numerous other company promotions, including one for a wheel for perpetual motion, one for ‘repairing and rebuilding parsonage and vicarage houses’ and the immortal company ‘for carrying on an undertaking of great advantage, but nobody to know what it is’. All eventually passed into nothing or something very near.
In consequence of its largely accidental escape, the reputation of the Bank for prudence was greatly enhanced.

As Frenchmen were left suspicious of banks, Englishmen were left suspicious of joint-stock companies. The Bubble Acts (named for the South Sea bubble) were enacted and for a century or more kept such enterprises under the closest interdict.

From 1720 to 1780, the Bank of England gradually emerged as the guardian of the money supply as well as of the financial concerns of the government of England. Bank of England notes were readily and promptly redeemed in hard coin and, in consequence, were not presented for redemption. The notes of its smaller competitors inspired no such confidence and were regularly cashed in or, on occasion, orphaned.
By around 1770, the Bank of England had become nearly the sole source of paper money in London, although the note issues of country banks lasted well into the following century. The private banks became, instead, places of deposit. When they made loans, it was deposits, not note circulation, that expanded, and, as a convenient detail, cheques now came into use. (Galbraith, pp. 32-34)

By a lucky accident, Britain was able to escape France's fate. When the South Sea bubble popped, the Bank of England was able to reliably take up the slack and manage the government's debt—an option that France did not have, since its central bank and the Company were part of the same organization, and that organization had a monopoly over loans to the government, tax collection, and money creation.

Next time: An Instrument of Revolution.

The Origin of Paper Money 5

As noted last time, the issuance of printed money by Pennsylvania was highly successful. It increased trade and greatly expanded the economy.

One person who noticed this was a young printer by the name of Benjamin Franklin. At the age of only 23, he wrote a treatise strongly advocating the benefits of printing paper money to increase the domestic money supply.

Franklin arrived in Philadelphia the year paper money was first issued by Pennsylvania (1723), and he soon became a keen observer of and commentator on colonial money…Franklin noted that after the legislature issued this paper money, internal trade, employment, new construction, and the number of inhabitants in the province all increased. This feet-on-the-ground observation, this scientific empiricism in Franklin's nature, would have a profound effect on Franklin's views on money throughout his life. He will repeat this youthful observation many times in his future writings on money.

Benjamin Franklin and the Birth of a Paper Money Economy

Franklin had noted the effects that the chronic shortage of precious metal coins had on the local economy. Something needed to be done, he thought. Franklin, of course, being a printer by trade, felt that his printing presses might be the solution to this problem.

Franklin's proposal–and this was key–was that paper money could not be backed by silver and gold, because the lack of silver and gold was what the paper money was designed to rectify in the first place!

Franklin also noted a point that critics of the gold standard have made ever since: the value of gold and silver is not stable, but fluctuates over time with supply and demand, just like everything else! Backing one's currency by specie was no guarantee of stable prices or a stable money supply. As was seen in Europe, a sudden influx could send prices soaring, and a dearth would send prices crashing. As we'll see, this was a major problem with precious metal standards throughout the nineteenth century—a point conspicuously ignored by goldbugs. Instead, he proposed a land bank, which, as we saw earlier, was a very popular idea at this time. Even though the colonies didn't have sources of precious metals—and couldn't mint them even if they did—they did have an abundant supply of real estate, far more than Europe, in fact. Land could be mortgaged, and the mortgages would act as backing for the new government-issued currency.

Economist (and Harry Potter character) Farley Grubb has written a definitive account of Franklin’s proposal:

Franklin begins his pamphlet by noting that a lack of money to transact trade within the province carries a heavy cost because the alternative to paper money is not gold and silver coins, which through trade have all been shipped off to England, but barter. Barter, in turn, increases the cost of local exchange and so lowers wages, employment, and immigration. Money scarcity also causes high local interest rates, which reduces investment and slows development. Paper money will solve these problems.

But what gives paper money its value? Here Franklin is clear throughout his career: It is not legal tender laws or fixed exchange rates between paper money and gold and silver coins but the quantity of paper money relative to the volume of internal trade within the colony that governs the value of paper money. An excess of paper money relative to the volume of internal trade causes it to lose value (depreciate). The early paper monies of New England and South Carolina had depreciated because the quantities were not properly controlled.

So will the quantity of paper money in Pennsylvania be properly controlled relative to the demands of internal trade within the province?

First, Franklin points out that gold and silver are of no permanent value and so paper monies linked to or backed by gold and silver, as with bank paper money in Europe, are of no permanent value. Everyone knew that over the previous 100 years the labor value of gold and silver had fallen because new discoveries had expanded supplies faster than demand. The spot value of gold and silver could fluctuate just like that of any other commodity and could be acutely affected by unexpected trade disruptions. Franklin observes in 1729 that “we [Pennsylvanians] have already parted with our silver and gold” in trade with England, and the difference between the value of paper money and that of silver is due to “the scarcity of the latter.”

Second, Franklin notes that land is a more certain and steady asset with which to back paper money. For a given colony, its supply will not fluctuate with trade as much as gold and silver do, nor will its supply be subject to long-run expansion as New World gold and silver had been. Finally, and most important, land cannot be exported from the province as gold and silver can. He then points out that Pennsylvania’s paper money will be backed by land; that is, it will be issued by the legislature through a loan office, and subjects will pledge their lands as collateral for loans of paper money.

Benjamin Franklin and the Birth of a Paper Money Economy

Franklin argued that the amount of money circulating would be self-correcting. If too little was issued, he said, falling prices would motivate people to mortgage their land to get their hands on more bills. If too much money was circulating, its value would fall, and borrowers would use the cheaper notes to pay off their mortgages and reclaim their land, thus retiring the notes from circulation and alleviating the oversupply.

Finally, Franklin argues that “coined land” or a properly run land bank will automatically stabilize the quantity of paper money issued — never too much and never too little to carry on the province’s internal trade. If there is too little paper money, the barter cost of trade will be high, and people will borrow more money on their landed security to reap the gains of the lowered costs that result when money is used to make transactions. A properly run land bank will never loan more paper money than the landed security available to back it, and so the value of paper money, through this limit on its quantity, will never fall below that of land.

If, by chance, too much paper money were issued relative to what was necessary to carry on internal trade such that the paper money started to lose its value, people would snap up this depreciated paper money to pay off their mortgaged lands in order to clear away the mortgage lender's legal claims to the land. So people could potentially sell the land to capture its real value. This process of paying paper money back into the government would reduce the quantity of paper money in circulation and so return paper money's value to its former level.

Automatic stabilization or a natural equilibrium of the amount of paper money within the province results from decentralized market competition within this monetary institutional setting. Fluctuations in the demand for money for internal trade are accommodated by a flexible internal money supply directly tuned to that demand. This in turn controls and stabilizes the value of money and the price level within the province.

Benjamin Franklin and the Birth of a Paper Money Economy
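The negative-feedback loop Grubb describes can be sketched as a toy simulation. The numbers and the 50% adjustment rule below are invented for illustration only—they are not Franklin's—but they show how a money supply that expands when notes are scarce and contracts when they are plentiful settles toward the level of internal trade.

```python
# Toy illustration of the self-correcting land bank described above.
# The figures and the 50%-per-year adjustment rule are invented for illustration.

money_supply = 160.0   # paper money in circulation (arbitrary units)
trade_demand = 100.0   # money needed to carry on internal trade

for year in range(1, 8):
    if money_supply > trade_demand:
        # notes depreciate, so borrowers repay mortgages with cheap notes,
        # retiring paper from circulation
        money_supply -= 0.5 * (money_supply - trade_demand)
    else:
        # money is scarce and barter is costly, so people mortgage land
        # to borrow additional notes into circulation
        money_supply += 0.5 * (trade_demand - money_supply)
    print(f"Year {year}: money supply = {money_supply:.1f}")
```

Run it and the supply converges on the demand for money from either direction—the "automatic stabilization" Grubb credits to a properly run land bank.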

Given that the United States was the major pioneer in the Western world for a successful paper fiat currency, it is ironic that we have become one of the centers of resistance to the very idea today. This is in large part due to the bottomless funding by billionaire libertarian cranks of shaky economic ideas in the United States, such as Austrian economics, whereas in the rest of the world common sense prevails. Wild, paranoid conspiracy theories about money (and just about everything else) also circulate far more widely in the United States than in the rest of the developed world, which has far better educational systems.

Returning to the gold standard is—bizarrely—championed today by people LARPing the American Revolution in tricorn hats and proclaiming themselves the only true "patriots". Yet, as we've seen, the young United States was the world's leading innovator in issuing paper money not backed by gold—i.e. fiat currency. And this led to its prosperity. Founding Father Benjamin Franklin was a major advocate of paper money not backed by gold. This is rather inconvenient for libertarians (as is most of actual history).

The young have always learned that Benjamin Franklin was the prophet of thrift and the exponent of scientific experiment. They have but rarely been told that he was the advocate of the use of the printing press for anything except the diffusion of knowledge. (Galbraith, p. 55)

That's right, Ben Franklin was an advocate of "printing money." Something to remember the next time a Libertarian glibly sneers at the concept. Later advocates of "hard money"—goldbugs like Andrew Jackson—would bring the U.S. economy to its knees in the early nineteenth century by insisting on a return to specie.

Here’s Galbraith describing the theory behind paper money:

There is very little in economics that invokes the supernatural. But by one phenomenon many have been tempted. In looking at a rectangular piece of paper, on frequent occasion of indifferent quality, featuring a national hero or monument or carrying a classical design with overtones of Peter Paul Rubens, Jacques Louis David or a particularly well-stocked vegetable market and printed in green or brown ink, they have been assailed by the question: Why is anything intrinsically so valueless so obviously desirable? What, in contrast to a similar mass of fibres clipped from yesterday’s newspaper, gives it the power to command goods, enlist service, induce cupidity, promote avarice, invite to crime? Surely some magic is involved; certainly some metaphysical or extraterrestrial explanation of its value is required. The priestly reputation and tendency of people who make a profession of knowing about money have been noted. Partly it is because such people are thought to know why valueless paper has value.

The explanation is wholly secular; nor is magic involved.

Writers on money have regularly distinguished between three types of currency:

(1) that which owes its value, as do gold and silver, to an inherent desirability derived from well-established service to pride of possession, prestige of ownership, personal adornment, dinner service or dentistry;

(2) that which can be readily exchanged for something of such inherent desirability or which carries the promise, like the early Massachusetts Bay notes, of eventual exchange; and

(3) currency which is intrinsically worthless, carries no promise that it will be redeemed in anything useful or desirable and which is sustained, at most, by the fiat of the state that it be accepted.

In fact, all three versions are variations on a single theme.

John Stuart Mill…made the value of paper money dependent on its supply in relation to the supply of things available for purchase.
Were the money gold or silver, there was little chance, the plethora of San Luis Potosí or Sutter’s Mill apart, for the amount to increase unduly. This inherent limit on supply was the security that, as money, it would be limited in amount and so retain its value.

And the same assurance of limited supply held for paper money that was fully convertible into gold and silver. As it held for paper that could not be converted into anything for so long as the supply of such paper was limited. It was the fact of scarcity, not the fact of intrinsic worthlessness, that was important. The problem of paper was that, in the absence of convertibility, there was nothing to restrict its supply. Thus it was vulnerable to the unlimited increase that would diminish or destroy its value.

The worthlessness of paper is a detail. Rock quarried at random from the earth's surface and divided into units of a pound and upward would not serve very happily as currency. So great would be the potential supply that the weight of rock for even a minor transaction would be a burden. But rock quarried on the moon and moved to the earth, divided and with the chunks duly certified as to the weight and source, though geologically indistinguishable from the earthbound substance, would be a distinct possibility, at least for so long as the trips were few and the moon rock retained the requisite scarcity. (Galbraith, pp. 62-64)
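Mill's point, as Galbraith restates it, is essentially the quantity theory: the value of money depends on how much of it there is relative to what it can buy. A minimal sketch using the textbook equation of exchange (M × V = P × Q)—with made-up numbers, and velocity and output held fixed for simplicity—shows the mechanism; this is a standard formalization, not Galbraith's own.

```python
# Back-of-the-envelope quantity theory: with velocity and real output held
# fixed, the price level moves in proportion to the money supply, so each
# note buys less as more are issued. All numbers are made up.

velocity = 5.0         # times a unit of money changes hands per year (assumed fixed)
real_output = 1_000.0  # goods available for purchase (assumed fixed)

for money_supply in (100.0, 200.0, 400.0):
    price_level = money_supply * velocity / real_output
    purchasing_power = 1 / price_level
    print(f"Money supply {money_supply:>5.0f} -> price level {price_level:.2f}, "
          f"purchasing power per unit {purchasing_power:.2f}")
```

Doubling the supply doubles the price level and halves what each note will buy—which is why scarcity, not intrinsic worth, is what matters, whether the scarce thing is gold, convertible paper, or moon rock.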

NEXT: England and France get on the paper money train. England succeeds; France fails.

The Origin of Paper Money 4

What’s often considered to be the first recorded issuance of government-backed paper money in the Western world was in Colonial America, and it was quite by accident.

There had been precedents, but they were very limited, and we only know about them through historical records. Paper money tends to disappear in the archaeological record, while coins survive, which means that earlier experiments in paper money may simply be lost to history.

Repeatedly in the European records we find mention of money made from leather during times of warfare and siege. Reports indicate that European monarchs occasionally used paper money during periods of crisis, usually war, and they do maintain that in Catalonia and Aragon, James I issued paper money in 1250, but no known examples have survived. Then, when the Spanish laid siege to the city of Leyden in the Lowlands in 1574, Burgomeister Pieter Andriaanszoon collected all metal, including coins, for use in the manufacture of arms. To replace the coins, he issued small scraps of paper.

In July 1661, Sweden's Stockholm Bank issued the first bank note in Europe to compensate for a shortage of silver coins. Although Sweden lacked silver, it possessed bountiful copper resources, and the government of Queen Christina (1634-1654) issued large copper sheets called platmynt (plate money), which weighed approximately 4 pounds each. In 1644 the government offered the largest coins ever issued: ten-daler copper plates, each of which weighed 43 pounds, 7 1/4 ounces. To avoid having to carry such heavy coins, merchants willingly accepted the paper bills in denominations of one hundred dalers. One such bill could be submitted for 500 pounds of copper plates. (Weatherford, p. 130)

For an example of platmynt, see this link: Swedish “plate money” (TYWKIIDBI)

The issuance was by Massachusetts in 1690. It took the form of government IOUs issued to pay for a raid on Quebec that was successfully repelled. Because the raid failed, the expected booty to pay for the cost of the expedition never materialized. The government, reluctant to raise taxes to pay for a failed expedition, issued IOUs instead. Due to the shortage of metal coins, these IOUs began circulating at their face value as a substitute for coins. And thus, by accident, paper money was created in the Western world:

The first issue of paper money was by the Massachusetts Bay Colony in 1690; it has been described as 'not only the origin of paper money in America, but also in the British empire, and almost in the Christian world'. It was occasioned, as noted, by war.

In 1690, Sir William Phips – a man whose own fortune and position had been founded on the gold and silver retrieved from a wrecked Spanish galleon near the shores of what is now Haiti and the Dominican Republic – led an expedition of Massachusetts irregulars against Quebec. The loot from the fall of the fortress was intended to pay for the expedition. The fortress did not fall.

The American colonies were operating on negligible budgets…and there was no enthusiasm for levying taxes to pay the defeated heroes. So notes were issued to the soldiers promising eventual payment in hard coin. Redemption in gold or silver, as these were returned in taxes, was promised, although presently the notes were also made legal tender for taxes. (Galbraith, pp. 51-52)

The colonial government intended to quickly redeem the certificates with tax revenues, but the need for money was so great that the certificates began changing hands, like money…[1]…For the next twenty years the notes circulated side by side with gold and silver of equivalent denomination. Notes and metal being interchangeable, there was, pro tanto, no depreciation. (Galbraith, p. 52)…

The practice quickly caught on among the colonies as a means of supplying a circulating currency. The issuances were to be temporary, in fixed amounts, and accompanied by taxes and custom duties to redeem them. [1]

To retire these bills on credit, the colonial governments accepted them—along with specie—in payment of taxes, fines and fees. As with “bills on loan” the governments used any specie that they received in tax payments to retire and then burn the notes. Also like “bills on loan,” the notes circulated freely within the colonies that issued them and sometimes in adjacent colonies. [1]

[1] Paper Money and Inflation in Colonial America

The circulation of these paper IOUs gave cash-strapped governments an idea. Governments could issue IOUs (hypothetically redeemable in gold and silver coins) in lieu of levying taxes to enable the government to pay for stuff. Such IOUs could then circulate as cash money—valuable because they would theoretically be redeemed by governments for gold and silver, or be used to discharge debt obligations to the state like taxes, fines and fees. As noted above, when gold and silver did come into the state's coffers, they could buy back the notes.

Unlike modern paper money, these IOUs typically had an expiration date. By redeeming the issued notes, the government could remove paper from circulation, lowering its debt obligations, while at the same time preventing paper money from losing too much of its value. (Note similarities with the Chinese system).

And so, as the 1700s dawned, colonial governments started commonly issuing paper money—colonial scrip—in lieu of taxes to goose the domestic economy by increasing the amount of money in circulation. Since precious metals were in short supply, most of these schemes were based on the land banking concept (i.e. monetizing land):

The Pennsylvania legislature issued its first paper money in 1723 — a modest amount of £15,000 (the equivalent of just over 48,000 Spanish silver dollars), with another £30,000 issued in 1724. This paper money was not linked to or backed by gold and silver money. It was backed by the land assets of subjects who borrowed paper money from the government and by the future taxes owed to the government that could be paid in this paper money…after the legislature issued this paper money, internal trade, employment, new construction, and the number of inhabitants in the province all increased…The initial paper money issued in 1723 was due to expire in 1731. (Typically, paper money was issued with a time limit within which it could be used to pay taxes owed to the issuing government — the money paid in being removed from circulation.)

Benjamin Franklin and the Birth of a Paper Money Economy (PDF)

In addition to funding military spending, one major driver behind colonial governments issuing IOUs as currency came from the extreme recalcitrance of the colonists in paying their allotted taxes, which means that this unfortunate tendency was present in America from the very beginning, as Galbraith notes:

A number of circumstances explain the pioneering role of the American colonies in the use of paper money. War, as always, forced financial innovation. Also, paper money…was a substitute for taxation, and, where taxes were concerned, the colonists were exceptionally obdurate; they were opposed to taxation without representation, as greatly remarked, and they were also, a less celebrated quality, opposed to taxation with representation. 'That a great reluctance to pay taxes existed in all the colonies, there can be no doubt. It was one of the marked characteristics of the American people long after their separation from England.' (Galbraith, pp. 46-47)

In subsequent years, the various colonial governments would rely more and more on issuing paper money. And when they did, it was noted, the volume of trade increased, and local economies expanded. There was always, however, the looming threat of too much colonial scrip being issued by governments, leading to depreciation:

Inevitably, however, it occurred to the colonists that the notes were not a temporary, one-time expedient but a general purpose alternative to taxation. More were issued as occasion seemed to require, and the promised redemption was repeatedly postponed.

Prices specified in the notes now rose; so, therewith did the price of gold and silver. By the middle of the eighteenth century the amount of silver or gold for which the note could be exchanged was only about a tenth of what it had been fifty years before. Ultimately the notes were redeemed at a few shillings to the pound from gold sent over to pay for the colonial contribution to Queen Anne’s War.

Samuel Eliot Morison has said of the notes issued by Massachusetts to pay off the soldiers back from Quebec that they were ‘a new device in the English-speaking world which undermined credit and increased poverty’. Other and less judicious historians have reflected the same view. But it is also known that rising prices stimulate the spirits of entrepreneurs and encourage economic activity just as falling prices depress both.

Were only so much paper money issued by a government as to keep prices from falling or, at most, cause a moderate increase, its use could be beneficial. Not impoverishment, but an increased affluence would be the result.

The question, obviously, is whether there could be restraint, whether the ultimate and impoverishing collapse could be avoided. The Law syllogism comes ominously to mind: If some is good, more must be better. (Galbraith, pp. 52–53)

The use of paper money as an alternative to government borrowing began to spread. More and more colonial governments (there obviously was no national government back then) would issue IOUs as a way to get around chronic shortages of gold and silver coins, and to avoid raising taxes. Meanwhile, although Europe had begun to experiment with paper money, it was still tied to amounts of gold and silver, limiting its application.

The results in the colonies were highly mixed. Some experiments were highly successful; others less so:

…the other New England colonies and South Carolina had also discovered paper money…Restraint was clearly not available in Rhode Island or South Carolina or even in Massachusetts. Elsewhere, however, it was present to a surprising extent. The Middle Colonies handled paper money with what must now be regarded as astonishing skill and prudence…The first issue of paper money there was by Pennsylvania in 1723. Prices were falling at the time, and trade was depressed. Both recovered, and the issue was stopped.

There appear to have been similar benefits from a second issue in 1729; the course of business and prices in England in the same years suggests that, in the absence of such action, prices would have continued down. Similar issues produced similarly satisfactory results in New York, New Jersey, Delaware and Maryland. As in Pennsylvania, all knew the virtue of moderation. (Galbraith, pp. 52-53)

Perhaps the most intriguing experiment was done by the colony of Maryland. It had a combination of what looks like a UBI scheme, coupled with a public banking system (à la North Dakota):

The most engaging experiment was in Maryland. Elsewhere the notes were put into circulation by the simple device of using them to pay public expenses. Maryland, in contrast, declared a dividend of thirty shillings to each taxable citizen and, in addition, established a loan office where worthy farmers and businessmen could obtain an added supply which they were required to repay.

Remarkably, this dividend was a one-time thing; as in the other Middle Colonies the notes so issued were ultimately redeemed in hard money. A near contemporary historian with a near-gift for metaphor credited the experiment with ‘feeding the flame of industry that began to kindle’. A much later student has concluded that ‘this was the most successful paper money issued by any of the colonies’.

Two centuries later during the Great Depression a British soldier turned economic prophet, Major C.H. Douglas, made very nearly the same proposal. This was Social Credit. Save in such distant precincts as the Canadian Prairies, he acquired general disesteem as a monetary crank. He was two hundred years too late. (Galbraith, pp. 53-54)

Interestingly, we see some of those ideas being floated once again today.

As Galbraith notes, later economic historians would focus exclusively on the failures of such early experiments, and deliberately ignore the places where they were successful. Much of this was based on the “gold is real money” ideology, along with the ideas around the “inherent profligacy of governments” which would invariably cause inflation. In other words, the groupthink of the economics priesthood:

Towards the end of the nineteenth century expanding university facilities, an increased interest in the past and a pressing need for subjects on which to do doctoral theses and other scholarly research all led to a greatly expanded exploration of colonial economic history. By then, among historians and economists, the gold standard had become an article of the highest faith. Their research did not subordinate faith to fact.

By what amounted to a tacit understanding between right-thinking men the abandoned tendencies of Rhode Island, Massachusetts and South Carolina were taken to epitomize the colonial monetary experience. The different experience of the Middle Colonies was simply ignored.

A leading modern student of colonial monetary experience has noted that: ‘One looks in vain for any discussion of these satisfactory currency experiments in the standard works on American monetary and financial history.’ Another has concluded that ‘…generations of historical scholarship have fostered a mistaken impression of the monetary practices of the colonies’. (Galbraith, pp. 54-55)

History repeats itself: today we once again have an economics caste wedded to orthodoxy and unwilling to consider alternative points of view. The MMT school of economics is fighting a lonely battle against this tendency today.

Next: Ben Franklin discovers money printing

The Origin of Paper Money 3

Despite paper instruments like bills of exchange having existed for centuries, for most ordinary people, money was exclusively the gold and silver coins minted by various national governments. Gold was used for high-value transactions, and silver for smaller ones. When the precious metals from the New World began flowing into Europe, the amount of coins dramatically increased, leading to a continent-wide bout of inflation.

The Spanish, the major beneficiaries of this increased money supply from silver mines of Bolivia and Mexico, used the money to purchase all sorts of things from abroad and live large. Because they became so filthy rich with very little effort (the enslaved Native Americans did all the hard work of digging out the silver), the Spanish failed to develop any domestic industries or innovate much, and thus were passed over by the more industrious Northern Europeans—much like a wealthy, spoiled heir who never learns any practical skills until the money runs out—and by then it’s too late.

There were many in Europe after 1493 who knew only distantly of the discovery and conquest of lands beyond the ocean seas, or to whom this knowledge was not imparted at all. There were few, it can be safely said, who did not feel one of its principal consequences.

Discovery and conquest set in motion a vast flow of precious metal from America to Europe, and the result was a huge rise in prices – an inflation occasioned by an increase in the supply of the hardest of hard money.

Almost no one in Europe was so removed from market influences that he did not feel some consequence in his wage, in what he sold, in whatever trifling thing he had to buy.

The price increases occurred first in Spain where the metal first arrived; then, as they were carried by trade (or perhaps in lesser measure by smuggling or for conquest) to France, the Low Countries and England, inflation followed there.

In Andalusia, between 1500 and 1600, prices rose perhaps fivefold. In England, if prices during the last half of the fifteenth century, i.e. before Columbus, are taken as 100, by the last decade of the sixteenth century they were roughly at 250; eighty years later, by the decade of 1673 through 1682, they were around 350, up by three-and-a-half times from the level before Columbus, Cortez and the Pizarros. After 1680, they levelled off and subsided, as much earlier they had fallen in Spain. (Galbraith, pp. 8-9)

Prior to this era, Europe had dealt with ongoing, chronic shortages of precious metals for coins, because much of the continent’s silver leaked out through trading with the Arab world, especially after the Crusades. This is why much of the European economy remained unmonetized for so long. In fact, northern Italian bankers had invented banking and bills of exchange specifically to deal with this problem. Thus, markets in Europe remained confined to specific market towns and “ports of trade” and were subject to strict regulations by rulers. It was not a lack of desire for profits on the part of rulers, but a lack of coins that kept capitalism in embryo.

The vast increase in the money supply from New World silver and gold is what made capitalism possible in Western Europe, but that’s a story for another time.

At its peak in the early 17th century, 160,000 native Peruvians, slaves from Africa and Spanish settlers lived in Potosí to work the mines around the city: a population larger than London, Milan or Seville at the time. In the rush to exploit the silver, the first Spanish colonisers occupied the locals’ homes, forgoing the typical colonial urban grid and constructing makeshift accommodation that evolved into a chaotic mismatch of extravagant villas and modest huts, punctuated by gambling houses, theatres, workshops and churches.

High in the dusty red mountains, the city was surrounded by 22 dams powering 140 mills that ground the silver ore before it was moulded into bars and sent to the first Spanish colonial mint in the Americas. The wealth attracted artists, academics, priests, prostitutes and traders, enticed by the Altiplano’s icy mysticism. “I am rich Potosí, treasure of the world, king of all mountains and envy of kings” read the city’s coat of arms, and the pieces of eight that flowed from it helped make Spain the global superpower of the period.

Potosí: The mountain of silver that was the world’s first global city (Aeon)

How silver turned Potosí into ‘the first city of capitalism’ (The Guardian)

This price spike led to an important realization once prices finally leveled off in the late 1600s: the number of economic transactions (and hence the overall size of the economy and the capacity to specialize) depended on the amount of money in circulation. In other words, the volume of trade is determined by the amount of currency available to conduct it.

Today this is known as the quantity theory of money.
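
In its modern textbook form (which no seventeenth-century pamphleteer ever wrote down in these symbols, of course), the idea is usually stated as the equation of exchange:

M × V = P × Q

where M is the stock of money in circulation, V is the velocity with which each unit changes hands, P is the price level, and Q is the real volume of transactions. Hold V and Q roughly steady, and the price level moves with the money supply—which is precisely what Europeans watched happen as New World silver poured in.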

This newfound abundance of silver in Europe caused rising prices–the so-called “Price Revolution”. For the first time there was enough money to create a new class of people whose wealth consisted primarily of money as opposed to land: moneyed men, or the merchant caste. It also caused Spanish coins to be widely used and distributed, functioning as the world’s first global currency from the Americas to the Middle East to Asia:

The silver of the Americas made possible a world economy for the first time, as much of it was traded not only to the Ottomans but to the Chinese and East Indians as well, bringing all of them under the influence of the new silver supplies and standardized silver values. Europe’s prosperity boomed, and its people wanted all the teas, silks, cottons, coffees, and spices which the rest of the world had to offer. Asia received much of this silver, but it too experienced the silver inflation that Europe underwent. In China, silver had one-fourth the value of gold in 1368, before the discovery of America, but by 1737 the ratio plummeted to twenty to one, a decline of silver to one-fifth of its former value. This flood of American silver came to Asia directly from Acapulco across the Pacific via Manila in the Philippines, whence it was traded to China for spices and porcelain. (Weatherford, Indian Givers, pp. 16-17)

The so-called “Price Revolution” taught Europeans another important lesson: What constituted money didn’t change, but its purchasing power did. Therefore, they concluded, the value of money depended on how much of it there was in circulation, and not on some intrinsic quality. If there was a shortage of cash, it was worth a lot (i.e. it had high purchasing power). If there was a surplus, it wasn’t worth nearly as much (i.e. it had lower purchasing power). They had seen this first-hand.

In other words, the value of money had to do with how much of it there was, more than any intrinsic, magical quality. The value attributed to gold and silver was merely a cultural artifact.

In fact, money had to be useless, since if it were more useful as a commodity than as money, then that’s what it would be used for, and there would be perennial shortages of currency causing the economy to contract.

This led to the following conclusions: If money has no inherent value, but is merely an expedient for spot transactions, then why not paper? But it does have to be backed by something, otherwise people will lose confidence in it. Although precious metal coins could be devalued by government edicts, their worth could never fall to zero, since there was always a commodity market for gold and silver for things like jewelry and tea sets. Precious metals tended to flow from where they were undervalued to countries where the commodity price was higher, causing perennial spot shortages throughout Europe, along with the requisite economic chaos.

The basic problem people were struggling with was that, since all money at the time was dependent on precious metals, how could you increase the supply of money without stumbling upon new sources of precious metal, as the Spanish had done? The money in circulation had to be increased—that was obvious to a growing number of people. But the low-hanging fruit of gold and silver had already been harvested. And with vast new material wealth continuing to flow into Europe from the Americas, how could the money supply be increased enough to take advantage of this?

Paper was an obvious solution. Paper had come to Europe in the Middle Ages from China. After the Black Death, many of the cotton clothes worn by the deceased were turned into pulp, which helped spread the use of paper, and indirectly drive the commercial revolution of the Middle Ages, along with innovations like Arabic numerals and double-entry bookkeeping (aka the “Venetian method”). The printing press, developed by Gutenberg in Mainz around 1440, further enhanced the power of paper printing. But the real use of paper was in banking:

In the West, paper found its most important use as a means of keeping ledgers in banks. Long before it was used as a means of printing more money, it was used by bankers to increase the money supply. Only later did it gradually emerge as a replacement for coins in daily commerce. The initial development and circulation of monetary bills of paper came about as a side effect of banking. (Weatherford, p. 128)

Paper instruments of credit were already widely circulating throughout Europe, such as Bills of Exchange. Yet, underneath it all, money was still ultimately tied to finite amounts of precious metal. Paper checks were simply transfers of monies from one account to another, similar to giro banking in the ancient world, while Bills of exchange were:

“…essentially a written order to pay a fixed sum of money at a future date. Bills of exchange were originally designed as short-term contracts but gradually became heavily used for long-term borrowing. They were typically rolled over and became de facto short-term loans to finance longer-term projects…bills of exchange could be re-sold, with each seller serving as a signatory to the bill and, by implication, insuring the buyer of the bill against default…”

Crisis Chronicles: The Commercial Credit Crisis of 1763 and Today’s Tri-Party Repo Market (Liberty Street)

One solution was just to issue credit in excess of the amount of gold and silver stored in your vaults—the so-called “goldsmith’s trick.” This became especially common around the time of the English Revolution, when goldsmiths acted as moneylenders and bankers. As long as there was enough gold and silver sitting in the vault to cover the claims of the people showing up to exchange their paper, you were all right. But if more paper was redeemed than the gold and silver you had at any one point, you were doomed. This is why governments were reluctant to embrace such a solution (later, this idea would underpin fractional reserve banking).
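
To make the arithmetic concrete, here is a minimal sketch of the goldsmith’s position. All figures are invented purely for illustration (the 4:1 ratio of paper to specie echoes the estimate quoted earlier for Law’s bank): the scheme works as long as redemptions in any period stay below the coin actually in the vault, and fails the moment they don’t.

```python
# A toy sketch of the "goldsmith's trick" (issuing paper claims in excess of specie).
# All figures are invented purely for illustration.

class GoldsmithBank:
    def __init__(self, specie: int, notes_issued: int):
        self.specie = specie        # gold and silver actually sitting in the vault
        self.notes = notes_issued   # paper claims circulating against that specie

    def redeem(self, presented: int) -> str:
        """What happens when note-holders show up demanding coin."""
        if presented <= self.specie:
            self.specie -= presented
            self.notes -= presented
            return f"paid out {presented}; {self.specie} in specie remains"
        return "run on the bank: more paper presented than coin on hand"

bank = GoldsmithBank(specie=1_000, notes_issued=4_000)  # paper outstrips coin 4:1
print(bank.redeem(300))   # an ordinary day: redemptions easily covered
print(bank.redeem(900))   # a panic: claims exceed the vault and the issuer is ruined
```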

The question ultimately boiled down to, if not gold and silver, then what would give paper money its value? And what would limit its supply? Otherwise, any enterprising printer could just print up money in any amount and give it to himself. Ultimately, the answers would come down to some sort of government authority to regulate the issuance of such bills, and back it up with the government’s credit.

One very common idea floating around in the late 1600s and early 1700s was the land bank–essentially monetizing land. Such banks wouldn’t take deposits in gold or silver; rather, they would issue government-backed paper money securitized by mortgages on land. “In these early cases the term “bank” meant simply the collection or batch of bills of credit issued for a temporary period. If successful, reissues would lead to a permanent institution or bank in the more modern sense of the term.” After all, even if a country didn’t have gold and silver mines, it did always have land. Land was valuable, and inherently limited in supply–even more so than gold and silver (“Buy land – they aren’t making any more of it,” said Mark Twain). This was a variant of the idea of paper money as a claim on real resources. However, the problem was much the same as with the goldsmith’s trick: what happens if you print money in excess of the underlying resources?

[I]f we look at the world through the lens of the late 17th century…[m]oney was made of metal, and there was therefore no scope for creating more money without finding new supplies of silver and gold. There were two types of wealthy individual: moneyed men and landed men.

The land bank proponents were early contributors to the economic debate. In their pamphlets the principal problem that they identified was the sluggish economy. They all agreed that the situation could be improved and saw the best means of improvement as an increase in the supply of money.

Rather than doing this as the Spanish and Portuguese did by sailing to the new world and bringing back vast quantities of precious metals, they proposed using the banking model that had succeeded in Amsterdam and Venice. According to Schumpeter, they “fully realised the business potentialities of the discovery that money – and hence capital in the monetary sense of the term – can be manufactured or created”.

Britain, which was not rich in terms of gold and silver, had plenty of potential in its land. Therefore, a land bank appeared to be a sensible suggestion. None of the land banks that were set up succeeded…

Land Bank Proposals 1650-1705 (PDF)

Land banks had already been established in the American Colonies in a limited fashion:

In 1686, Massachusetts established the first American land bank. Others soon followed.

Despite the name, these were not true banks; they did not accept deposits. Instead, they issued “banks” or notes, or “bills on loan,” to borrowers who put up land as collateral with the bank.

To fortify confidence in the notes, colonial governments promised to issue only a fixed amount of notes and for a set term and to secure their loans with collateral typically equal to twice the amount of the loan.

These notes soon became legal tender for all public and private debts. Principal and interest payments were due annually, but the bank often delayed the first principal payment for a few years. Payments had to be made in notes or in specie.

While the notes furnished a circulating currency, the interest payments provided a revenue stream to the colonial governments.

Paper Money and Inflation in Colonial America (Owen F. Humpage)
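
As a rough sketch of the mechanics described in the passage above (the rate, term and amounts are invented for illustration; actual colonial loan offices varied), the borrower pledges land worth at least twice the note issue, retires the notes in annual principal installments, and the interest stream goes to the colonial treasury:

```python
# Rough sketch of a colonial land-bank ("loan office") issue.
# The rate, term and amounts are invented; real colonial practice varied.

def land_bank_loan(land_value: float, loan: float, rate: float, years: int):
    """Issue notes against land collateral and return the annual payment schedule."""
    if land_value < 2 * loan:
        raise ValueError("collateral must be at least twice the amount of the loan")
    annual_principal = loan / years
    outstanding = loan
    schedule = []
    for year in range(1, years + 1):
        interest = outstanding * rate        # revenue stream to the colonial government
        outstanding -= annual_principal      # notes retired as principal comes back in
        schedule.append((year, round(annual_principal, 2), round(interest, 2)))
    return schedule

# e.g. 100 pounds of notes lent against 250 pounds' worth of land, at 5% over 10 years
for year, principal, interest in land_bank_loan(250.0, 100.0, 0.05, 10):
    print(f"year {year}: principal {principal}, interest {interest}")
```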

National land banks were proposed in the early 1700’s by two people who would become very influential in the history of paper money: John Law (for France) and Benjamin Franklin (for Pennsylvania). Later on, this idea would be used by the revolutionary French government to back its own paper currency called assignats. They used the land seized from the Catholic Church and some aristocrats to back the money. And there was a lot of this land—the Church owned an estimated one-fifth of all the land in France prior to the Revolution.

We can think of this as the very earliest rumblings of today’s Modern Monetary Theory (MMT). Money wasn’t gold and silver after all—rather, it was any means of exchange by which trade was conducted. The medium could be anything, so long as it retained its value in exchange. What really mattered was the supply of it: that it was somewhat commensurate with the amount of economic transactions desired. The Scotsman John Law, who would establish the first paper money system in France, had seen people at the gambling tables of England using bills of exchange, stocks, bonds, banknotes, IOUs—any sort of valuable paper instrument—as de facto money in a pinch. This gave him the essential insight that any paper people trusted to hold its value could be used as money, not just gold and silver coins:

[John] Law thought that the important thing about money wasn’t its inherent value; he didn’t believe it had any. “Money is not the value for which goods are exchanged, but the value by which they are exchanged,” he wrote. That is, money is the means by which you swap one set of stuff for another set of stuff. The crucial thing, Law thought, was to get money moving around the economy and to use it to stimulate trade and business.

As Buchan writes, “Money must be turned to the service of trade, and lie at the discretion of the prince or parliament to vary according to the needs of trade. Such an idea, orthodox and even tedious for the past fifty years, was thought in the seventeenth century to be diabolical.”

The Invention of Money (The New Yorker)

What was undeniable was that the growing economies of the North Atlantic needed more money, and lots of it; far in excess of what any gold and silver mines anywhere in the world could reasonably provide.

Next: The first (Western) paper money

The Origin of Paper Money 2

When it comes to paper money in the West, the foremost innovator was the United States, as John Kenneth Galbraith points out:

If the history of commercial banking belongs to the Italians and of central banking to the British, that of paper money issued by a central government belongs indubitably to the Americans. (Galbraith, p. 45)

The reason the American colonies had to experiment with paper money was simple: “official” money in the American Colonies was gold and silver coins, and there was a perennial shortage of such coins.

The American colonies had no rich deposits of gold or silver, unlike the Spanish in Latin America. There were no mines, and, to make things worse, there were no mints allowed in North America. And, to top it all off, the British government forbade the colonies from chartering banks: “Thus bank notes, the obvious alternative to government notes, were excluded.” (Galbraith, p. 47). Colonists used whatever coins they could get their hands on, most of which came from the Spanish colonies to the south. In particular, this meant the Spanish Peso de Ocho Reales, or Piece of Eight: the world’s first global currency. This was also the origin of the famed dollar $ign. Foreign coins would continue to circulate as money in the United States until after the Civil War.

The curious origin of the dollar symbol (BBC)

Since the colonies couldn’t mint their own coins, if you wanted to get your hands on gold and silver coins, you had no other choice but to trade with the outside world. If you didn’t trade with the outside world, then getting sufficient coins was really difficult, severely limiting internal trade. This wasn’t accidental—the British, like all colonial powers, wanted the colonies to be sources of raw materials for their domestic manufacturing industries, and not to be economically self-sufficient.

To help alleviate the ongoing shortage of precious metal coins, local authorities might have passed laws to restrict the export of gold and silver–what we would today call capital controls—but such laws were expressly forbidden by the British government. In the mercantilist world of the 1600-1700s, the strength of a nation lay in the amount of gold and silver stashed away in its vaults—probably a holdover from the time when gold and silver paid for mercenaries in Europe before the era of professional standing armies.

And so there was a perennial, ongoing shortage of currency for transactions. This was an anchor around the leg of the domestic economy of the colonies.

…the British colonies in North America suffered from a constant shortage of all coins. The mercantile policies then in vogue in London sought to increase the amount of gold and silver money in Britain and to do whatever was practical in order to prohibit its export, even to its own colonies.

Beginning in 1695, Britain forbade the export of specie to anywhere in the world, including to its own colonies. As a result, the American colonies were forced to use foreign silver coins rather than British pounds, shillings, and pence, and they found the greatest supply of coins in the neighboring Spanish colony of Mexico, which operated one of the world’s largest mints.

Because of the great wealth produced in Mexico and Peru, Spanish coins became the most commonly accepted currency in the world…The most common Spanish coin in use in the British colonies in 1776 was the pillar dollar, so named because the obverse side showed the Eastern and Western hemispheres with a large column on either side.

In Spanish imperial iconography, the columns represented the Pillars of Hercules, or the narrow strait separating Spain from Morocco and connecting the Mediterranean with the Atlantic. A banner hanging from the columns bore the words plus ultra, meaning “more beyond.” The Spanish authorities began issuing this coin almost as soon as they opened the mint in Mexico with the intent of publicizing the discovery of America, which was the plus ultra, the land out beyond the Pillars of Hercules.

Some people say that the modern dollar sign is derived from this pillar dollar. According to this explanation, the two parallel lines represent the columns and the S stands for the shape of the banner hanging from them. Whether the sign was inspired by this coin or not, the pillar dollar can certainly be called the first American silver dollar. (Weatherford, pp. 117-118)

Another thing the colonists did to get around this chronic shortage of metal coins was barter, which led to settling accounts with all sorts of things other than precious metal coins. They might settle accounts, for example, with so-called “country pay” or “country money,” typically cash crops: cod, tobacco, rice, grain, cattle, indigo, whiskey, brandy–whatever was at hand. In 1775, North Carolina declared as many as seventeen different forms of money to be legal tender.

Without the convenience of money, colonists resorted to many less-efficient methods of trading. Barter, of course, was common, particularly in rural areas, but individuals often had to accept goods that they did not particularly need or want only because they had no other way to complete a transaction. They accepted these goods hoping to pass them on in future trades. Some items, most famously tobacco in Virginia and Maryland, worked well in this way and became commodity monies directly or as backing for warehouse receipts. Various other types of warehouse receipts, bills of exchange against deposits in London, and individuals’ promissory notes might also circulate as money. In addition, shopkeepers and employers sometimes issued “shop notes,” a type of scrip—often in small denominations—redeemable at a specific store.

Out of necessity, merchants and wealthy individuals frequently extended credit to others. In an economy that depended heavily on barter, however, one could end up holding debts against many individuals and across a broad array of goods. People naturally hoped to net out some of these debts, but this is extremely difficult under barter. Fortunately, colonial creditors could tally debts in British pounds or colonial currencies even if these currencies were not readily available. In this way, money acted as a unit of account. By attaching a value to things, money accommodated the netting out of debts.

Paper Money and Inflation in Colonial America (Cleveland Fed)

One of the most popular substitutes in North America could be obtained domestically: beads made from marine shells called wampum, which were used extensively in the tribute economy of the Iroquois nations. Wampum is one of a huge number of currencies all over the globe that were made from sea shells, including cowrie shells and dentalium. Since these were regarded as valuable by Native American tribes, they had the added advantage of being able to be traded for animal pelts bagged by the Native Americans (who soon stripped the forest bare in order to get more wampum—and hence more prestige). In 1664 Pieter Stuyvesant arranged a loan in wampum worth over 5,000 guilders for paying the wages of workers constructing the New York citadel. They were even subject to a form of counterfeiting:

The first substitute was taken over from the Indians. From New England to Virginia in the first years of settlement, the wampum or shells used by the Indians became the accepted small coinage. In Massachusetts in 1641, it was made legal tender, subject to some limits as to the size of the transaction, at the rate of six shells to the penny.

However, within a generation or two it began to lose favor. The shells came in two denominations, black and white, the first being double the value of the second. It required but small skill and a smaller amount of dye to convert the lower denomination of currency into the higher.

Also, the acceptability of wampum depended on its being redeemed by the Indians in pelts. The Indians, in effect, were the central bankers for the wampum monetary system, and beaver pelts were the reserve currency into which the wampum could be converted. This convertibility sustained the purchasing power of the shells.

As the seventeenth century passed and settlement expanded, the beavers receded to the evermore distant forests and streams. Pelts ceased to be available; wampum ceased, accordingly, to be convertible and thus, in line with expectation, it lost in purchasing power. Soon it disappeared from circulation except as small change. (Galbraith, pp. 47-48)

Another very popular domestic currency in use was tobacco leaf. In fact, tobacco’s reign as currency in America lasted longer than gold’s:

Tobacco, although regionally more restricted, was far more important than wampum. It came into use as money in Virginia a dozen years after the first permanent settlement in Jamestown in 1607. Twenty-three years later, in 1642, it was made legal tender by the General Assembly of the colony by the interestingly inverse device of outlawing payments that called for payment in gold or silver.

The use of tobacco money survived in Virginia for nearly two centuries and in Maryland for a century and a half – in both cases until the Constitution made money solely the concern of the Federal government. The gold standard, by the common calculation, lasted from 1879 until the cancellation of the final attenuated version by Richard Nixon in 1971. Viewing the whole span of American history, tobacco, though more confined as to region, had nearly twice as long a run as gold. (Galbraith, p. 48)

And such practices might be where Adam Smith came up with his erroneous notion of primitive barter economies, which continues to plague economics and economic history to this day.

Early American Colonists Had a Cash Problem. Here’s How They Solved It (Time)

This illustrates another dictum about money: barter tends to occur in fully monetized market economies where the medium of exchange is in short supply. This is because internal exchanges in market economies take the form of spot transactions among anonymous competing strangers. Anthropologists now know that pre-monetary economies were embedded in social relations and took the forms of reciprocity, redistribution, householding, and ceremonial exchange, rather than constant efforts to “truck, barter and exchange.” Anthropologists have never found an example of a barter economy anywhere in the world (e.g. “I’ll give you ten chickens for that cow”).

People in North America and other remote regions were using things like cod, tobacco, grain, brandy, and shells to settle accounts, sure—but these were fully monetized economies that just happened to have a chronic shortage of coins! To get around this, certain items that were particularly valuable because they could be traded with the outside world (like cod in Newfoundland, or tobacco in Virginia) were used to settle accounts. Or, because some items were particularly valuable inside the community, they could be used in subsequent trades as a medium of exchange (like iron nails in Scotland, another Smith example). One might include the “cigarette money” used in prisons in this category. A contemporary example is the use of spruce tips in remote Alaskan towns: spruce tips can only be harvested during a few weeks in the spring and are used in all sorts of exported products (beer, tea, soap, etc.) that are traded with the outside world.

A year after moving to Skagway, Alaska, John Sasfai walked into Skagway Brewing Co. and ordered the signature Spruce Tip Blonde Ale. But instead of pulling out his wallet, the guide for Klondike Tours put a sack of spruce tips on the bar to pay his tab. That’s because in this town, the bounty he foraged from trees near Klondike Gold Rush National Historical Park serves as a currency.

This village, with a year-round population just shy of 1,000, is notably remote – it’s about 100 miles north of Juneau and 800 miles south-east of Anchorage by car. And though stampeders established Skagway during the late-19th-Century gold rush, these days the nuggets of value are plucked from the forest, not panned or mined. While spruce tips – the buds that develop on the ends of spruce tree branches – are only good for cash at Skagway Brewing Co., bartering with spruce tips for food, firewood or coffee (which are delivered by barge once a week) is not uncommon.

The Alaska town where money grows on trees (BBC)

In all of Smith’s cases, prices were denominated in standard units of account; people simply settled their debts in whatever was at hand. But none of these things were the origin of prices and money, as Smith incorrectly claimed.

To start, with Adam Smith’s error as to the two most generally quoted instances of the use of commodities as money in modern times, namely that of nails in a Scotch village and that of dried cod in Newfoundland, have already been exposed [as fraudulent] … and it is curious how, in the face of the evidently correct explanation … Adam Smith’s mistake has been perpetuated.

In the Scotch village the dealers sold materials and food to the nail makers, and bought from them the finished nails the value of which was charged off against the debt. The use of money was as well known to the fishers who frequented the coasts and banks of Newfoundland as it is to us, but no metal currency was used simply because it was not wanted.

In the early days of the Newfoundland fishing industry there was no permanent European population; the fishers went there for the fishing season only, and those who were not fishers were traders who bought the dried fish and sold to the fishers their daily supplies. The latter sold their catch to the traders at the market price in pounds, shillings and pence, and obtained in return a credit on their books, with which they paid for their supplies. Balances due by the traders were paid for by drafts on England or France.

A moment’s reflection shows that a staple commodity could not be used as money, because ex hypothesi, the medium of exchange is equally receivable by all members of the community. Thus if the fishers paid for their supplies in cod, the traders would equally have to pay for their cod in cod, an obvious absurdity. In both these instances in which Adam Smith believes that he has discovered a tangible currency, he has, in fact, merely found—credit.

Then again as regards the various colonial laws, making corn, tobacco, etc., receivable in payment of debt and taxes, these commodities were never a medium of exchange in the economic sense of a commodity, in terms of which the value of all other things is measured. They were to be taken at their market price in money. Nor is there, as far as I know, any warrant for the assumption usually made that the commodities thus made receivable were a general medium of exchange in any sense of the words. The laws merely put into the hands of debtors a method of liberating themselves in case of necessity, in the absence of other more usual means. But it is not to be supposed that such a necessity was of frequent occurrence, except, perhaps in country districts far from a town and without easy means of communication.

What is money? (Alfred Mitchell-Innes)

All of this experience showed colonists that multiple things could be used as money, if needed. There was no more magic to a gold standard than to a cowrie standard, or a tobacco standard, a grain standard, or a cattle standard, or anything else for that matter. This would prove to be an instrumental lesson in the creation of paper money in the colonies.

Galbraith, for his part, gives an alternative explanation for the chronic lack of precious metals in the American colonies:

Many countries or communities had gold and silver in comparative abundance without mines. Venice, Genoa, Bruges had no Mother Lode (Nor today does Hong Kong or Singapore.) While the colonists were required to pay in hard coin for what they bought from Britain, they also had products – tobacco, pelts, ships, shipping services – for which British merchants would have been willing, and were quite free, to expend gold and silver.

Much more plausibly, the shortage of hard money in the colonies was another manifestation of Gresham. From the very beginning the colonists experimented with substitutes for metal. The substitutes, being less well regarded than gold or silver, were passed on to others and thus were kept in circulation. The good gold or silver was kept by those receiving it or used for those purchases, including those in the mother country, for which the substitutes were unacceptable. (p. 47)

So the colonists were forced by economic necessity to experiment with paper money, and that’s why the United States is the cradle of rolling out this innovation. As Galbraith notes of the above cases, “None of these substitutes was important as compared with paper money.” (Galbraith p. 51).

Next: Europe rethinks money

The Origin of Paper Money 1

Where did paper money come from? That’s the question behind this article from The New Yorker: The Invention of Money. It’s a review of recent biographies of John Law and Walter Bagehot. The author concludes:

The present moment in financial invention therefore has some similarities with the period when money in the form we currently understand it—a paper currency backed by state guarantees—was first created. The hero of that origin story is the nation-state. In all good stories, the hero wants something but faces an obstacle. In the case of the nation-state, what it wants to do is wage war, and the obstacle it faces is how to pay for it.

At the same time, I’ve been reading a few popular books on monetary history. One is Jack Weatherford’s The History of Money. Weatherford, best known for his books about Genghis Khan, is eminently readable, and hits most of the major developments. However, he is clearly in the Ron Paul school of economics: gold alone is money, governments are profligate and can’t be trusted, free banking is good, central banks are bad, etc. There are also a number of basic factual errors in the book, which leads me to recommend it only if you take it as a brief survey that gets many things wrong and is a bit outdated.

Weatherford’s major reference for his chapter on paper money is John Kenneth Galbraith’s Money: Whence It Came, Where It Went. So I decided to go directly to the source. Galbraith, a lauded economist, has a view that is much more authoritative and nuanced than Weatherford’s. Galbraith’s book concentrates mainly on the origins of banking and the modern money system, and not so much on the deep history of money in the ancient world or the Medieval period.

I’d like to take these (and others) and give an account of how the money system works today. While Modern Monetary Theory is a good descriptor of how money works in nation-states in the present, it often doesn’t describe how that system initially came about, and what makes it so radically different from how the money system functioned in ancient economies.

But first, I’d like to say a few brief words on why any of this matters.

Like it or not, money runs the world. If you want to understand how the world works—and how to change it—it’s important to know how the systems comprising it work. Money may seem like a boring topic (sorry!), but I would argue that no knowledge is more fundamental and useful for trying to make things marginally better. I can’t tell you how many people I’ve met who call themselves “Socially liberal but fiscally conservative.” And what do they mean by “fiscally conservative?” Nine times out of ten, it’s this: money is inherently scarce; debt is evil; and government budgets should be balanced down to the penny. You also have libertarian Bitcoin cranks, who are convinced that algorithms will save mankind once the state somehow withers away. These views are extraordinarily resistant to any kind of challenge, almost as if they were a de facto religion (in fact, they are probably even more resistant to rational analysis than most people’s religious faith!) Such people would be amenable to a more progressive message if not for the universal brainwashing about what money is, and what it does. History can provide a useful guide.

China’s False Start

All paper money rests on the same fundamental basis: it is a circulating IOU. The name of the creditor backing the notes and what’s used to securitize them changes over time, however. Sometimes it’s a particularly reputable member of the community. Sometimes it’s a king or other ruler. Sometimes it’s a democratically elected government–or more precisely, the anticipated future revenues of that government. Sometimes it’s backed by something tangible, like silver, gold, or real estate (the most common options). Sometimes it’s not. Nowadays, sovereign money is usually backed by the government’s ability to redistribute and to impose binding liabilities on its citizens (and, by extension, its monopoly on the legitimate use of force).

Paper money began where papermaking began: in China. The usual sources were hemp and mulberry bark, and printing blocks were made of wood or metal. Because of China’s strong imperial state structure, centrality, and geographic reach, it could command officially stamped pieces of paper to be accepted by its citizens as currency in lieu of precious metals. The story is told in this excellent podcast by Tim Harford on the origins of paper money: Paper Money (50 Things that Made the Modern Economy)

In Harford’s telling, paper money begins in Sichuan province, where iron coins were used rather than gold and silver in order to keep specie from leaking out of China to the hostile territories surrounding it, such as those of the Jurchen. Iron coins had holes in the middle and were carried around on cords, called cash.
The problem, as you might expect, was that these strings of heavy iron coins were extremely cumbersome. You would be turning over a larger weight of coins than the weight of the things you were trying to buy: 10 pounds of coins for a five-pound chicken, or something like that.

Sichuan’s iron currency suffered from serious deficiencies. The low intrinsic value of iron coins, worth no more than a tenth of the equivalent amount of bronze coin, imposed a great burden on merchants who needed to convey their purchasing capital from one place to another, and on ordinary consumers as well. A housewife would have to bring a pound and a half of iron coin to the marketplace to buy a pound of salt, and a merchant from the capital would receive ninety-one and a quarter pounds of iron coin in exchange for an ounce of silver.

Of course, the inconvenience of transporting low-value coin affected bronze currency as well. In the early ninth century, the Tang government created depositories at its capital of Chang’an where merchants could deposit bronze coin in return for promissory notes (known as feiqian, or “flying cash”) that could be redeemed in provincial capitals. “Flying cash” was especially popular among tea merchants who wished to return their profits from the sale of tea in the capital to the distant tea-growing areas of southeastern China. The Song dynasty continued this practice under the rubric of “convenient cash” (bianqian), accepting payments of gold, silver, coin, or silk in return for notes denominated in bronze coin. (The Origins of Value, pp. 67-68)

In the mid-990s, Sichuan was captured by rebels (partly angered by depreciating currency), who shut down the mint. It remained shut even after the government regained control of the province. This prompted some private merchants to issue their own paper bills to compensate for the acute shortage of coins. Such bills represented debt—the debt of the private merchant, of course. These bills soon began to circulate, and people began using them in place of iron coins, as Harford describes:

Instead of carrying around a wagonload of iron coins, a well-known and trusted merchant would write an IOU, and promise to pay his bill later when it was more convenient for everyone.

That was a simple enough idea. But then there was a twist, a kind of economic magic. These “jiaozi”, or IOUs, started to trade freely. Suppose I supply some goods to the eminently reputable Mr Zhang, and he gives me an IOU. When I go to your shop later, rather than paying you with iron coins – who does that? – I could write you an IOU.

But it might be simpler – and indeed you might prefer it – if instead I give you Mr Zhang’s IOU. After all, we both know he’s good for the money. Now you, and I, and Mr Zhang, have together created a kind of primitive paper money – it’s a promise to repay that has a marketable value of its own – and can be passed around from person to person without being redeemed.

This is very good news for Mr Zhang, because as long as people keep finding it convenient simply to pass on his IOU as a way of paying for things, Mr Zhang never actually has to stump up the iron coins. Effectively, he enjoys an interest-free loan for as long as his IOU continues to circulate. Better still, it’s a loan that he may never be asked to repay.

No wonder the Chinese authorities started to think these benefits ought to accrue to them, rather than to the likes of Mr Zhang. At first they regulated the issuance of jiaozi, but then outlawed private jiaozi and took over the whole business themselves. The official jiaozi currency was a huge hit, circulating across regions and even internationally. In fact, the jiaozi even traded at a premium, because they were so much easier to carry around than metal coins.

How Chinese mulberry bark paved the way for paper money (BBC)

Over the next ten years, these “exchange bills” became important in China’s intraregional trade, but bogus private bills issued by unscrupulous traders remained an ongoing problem for government officials. There were growing calls for government to get more involved in the circulation of bills. Enter the new prefect of Chengdu, one Zhang Yong. In 1005, he issued a series of reforms to address this problem. He:

1.) reopened Sichuan’s mints and introduced a new large iron coin that was equivalent to ten small iron coins, or two bronze coins;

2.) restricted the right to issue exchange bills to a consortium of sixteen merchant houses in Chengdu that were known to have sufficient financial resources to back the bills up, and;

3.) standardized the bills by mandating that they be issued in a specified size, color and format, using government-supplied labor and materials (although merchants could add their own watermark).

There were no standard denominations; rather, the merchants ascribed the value of the note in ink as needed. A three percent fee was charged for cashing in the notes. There was no limit on the number of bills issued. The amount of bills in circulation tended to vary with the seasons: more bills were issued in the early summer when new silk reached the market and in the fall during the rice harvest.

There were still problems with the paper currency, however, such as counterfeiting and overissuance of bills without sufficient backing. In 1024 under a new governor, Xue Tian, the government took over the issuance of jiaozi. A state-run Jiaozi Currency Bureau was established in Chengdu and given exclusive rights to issue jiaozi. The bills had the same format, but were issued in fixed denominations: one and ten guan. Most significantly, the bills had an expiration date of two years, exchangeable for fresh ones, giving the government a modicum of control over the amount issued and preventing the counterfeiting of worn or outdated bills. Also, quotas were established for the issue of the currency. Tea merchants engaged in intraregional and international trade were the most enthusiastic users of the currency, as it eliminated the need to transport heavy coins and prevented robbery by bandits (note that the needs of traveling merchants were also instrumental in the creation of Bills of Exchange issued by banks in medieval Europe centuries later).

Yet there were still problems. The government issued notes to procure military supplies from the merchants; and the ongoing costs of wars on the frontier led to their overissue. Plus, a new emperor nationalized the tea industry, meaning that the major consumers of jiaozi—the tea merchants—no longer had as much use for them. This loss of demand alongside oversupply caused a sharp depreciation in the value of the currency in the market. Instead of trading at a ten percent premium, the bills were now accepted at a ten percent discount. In 1107 the government issued a new paper currency—the qianyin—at a rate of 1:4 to the old, depreciating the earlier jiaozi bills in an effort to reduce the supply.

The rest of the history of China’s bills is basically a cycle of the same thing: issuing new bills, overspending due to military needs on the frontier, rampant counterfeiting, bills depreciating, demonetizing old notes, new dynasties issuing new bills, etc. Bills were still in use in trade when Marco Polo visited China. This is the description from the fourteenth century by the Arab traveller ibn Battuta:

The Chinese use neither [gold] dinars nor [silver] dirhams in their commerce. All the gold and silver that comes into their country is cast by them into ingots, as we have described. Their buying and selling is carried on exclusively by means of pieces of paper, each of the size of the palm of the hand, and stamped with the sultan’s seal. Twenty-five of these pieces of paper are called a balisht, which takes the place of the dinar with us [as the unit of currency]

This demonstrates some of the essential dictums of Modern Monetary Theory.

The first is Hyman Minsky’s dictum: Anyone can create money, the secret is in getting it accepted.

The second is Felix Martin’s definition of money: Money is tradeable debt.

The other is the observation that the credit that bears the highest reputation is typically that of the sovereign. Gresham’s Law being what it is, this usually means that the sovereign’s money will drive out all competitors, as we’ll see much later in the United States during the Civil War.

As a reminder, Gresham’s Law is this: Bad money drives out good, or perhaps, more accurately, people spend “lesser” money if they can, and hoard “greater” money for themselves.

Gresham’s Law…is perhaps the only economic law that has never been challenged, and for the reason that there has never been a serious exception. Human nature may be an infinitely variant thing. But it has its constants. One is that, given a choice, people keep what is best for themselves, i.e. for those whom they love the most. (Galbraith, p. 8)
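
As a toy illustration of the mechanism (the coins and their metal contents are entirely hypothetical), each holder spends the coin whose metal is worth least relative to its face value and hoards the rest, so only the “bad” money stays in circulation:

```python
# Toy illustration of Gresham's Law: two coins with the same face value
# but different metal content. The figures are hypothetical.

purse = [
    {"name": "full-weight silver shilling", "face": 1.0, "metal_value": 1.00},
    {"name": "clipped shilling",            "face": 1.0, "metal_value": 0.60},
]

def coin_to_spend(coins):
    """Pick the coin worth least as metal relative to its face value."""
    return min(coins, key=lambda c: c["metal_value"] / c["face"])

def coin_to_hoard(coins):
    """Pick the coin worth most as metal relative to its face value."""
    return max(coins, key=lambda c: c["metal_value"] / c["face"])

print("spent:  ", coin_to_spend(purse)["name"])   # the debased coin keeps circulating
print("hoarded:", coin_to_hoard(purse)["name"])   # the good coin vanishes from circulation
```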

A similar rationale led to the establishment of banks and banking in Northern Europe during the Age of Sail. You deposited coins and got a receipt for the amount of coins stashed in the vault. These receipts could be used to pay for things, with the value equivalent to the coins traded (in fact, the notes were more valuable, since they couldn’t be melted down or devalued).

A final interesting note: overissuance of paper currencies and lavish spending by the Yongle emperor Zhu Di (on wars, but also notably on the Chinese treasure ship voyages) led to China going back onto a silver standard just in time for the European discovery and conquest of the New World. The Chinese demand for silver is what fueled the European trade with the Far East, since the Europeans had nothing else that the Chinese wanted to exchange for goods like silks and porcelain. Without that silver standard, who knows what would have happened?

The sizable deficits incurred by Yongle’s costly foreign expeditions, including the famous maritime explorations of Admiral Zheng He and his fleet, and the emperor’s decision to relocate the Ming capital from Nanjing to Beijing were abated, albeit temporarily, by printing more money. Finally, in the 1430s, the Ming yielded to economic realities, abandoning its paper currency and capitulating to the dominance of silver in the private economy. The Ming state gradually converted its most important sources of revenue into payments in silver, while suspending emission of paper money and minting in bronze coin.

Though still uncoined, silver prevailed as the monetary standard of the Ming and subsequent Qing dynasty (1644-1911), fueled from the sixteenth century onward by the import of vast quantities of foreign silver from Japan and the Spanish colonies in the Americas. In times of fiscal crisis, such as on the eve of the fall of the Ming dynasty in 1644 and during the worldwide depression of the 1830s to 1840s, appeals to restore paper currency were renewed, but ignored. In the nineteenth century private banks, both Chinese and foreign, began to issue negotiable bills, but the weakness of the central government after its defeat in the Opium War precluded the emergence of a unified currency…Not until 1935, under the Republic of China, did China once again have a unified system of paper currency. (The Origins of Value, p. 87-89)

Although paper money first originated in China, the paper money we use today has no direct lineage with these systems. Government-issued paper money was invented independently in Western Europe, and under very different circumstances. We’ll take a look at that next time.

Despite the importance of paper money in Chinese history, the modern world system of paper money did not develop in China, or even in the Mediterranean homeland of Marco Polo or ibn-Batuta. It evolved in the trading nations around the North Atlantic. (Weatherford, p. 129)

Outrageous Historical Myths


Not that long ago, I ran across someone on Reddit espousing the old historical myth that Medieval people walked around 365 days a year covered in dirt and filth, and never saw the inside of a bathtub their entire lives.

I didn’t have the following link at the time, but I’ve bookmarked it in order to deploy it the next time someone brings up this old, ridiculous chestnut that refuses to die:

I assure you, medieval people bathed. (Going Medieval)

Although I didn’t have that link, I did post the following paragraphs from Lewis Mumford’s book, The Culture of Cities, which does an amazing job demolishing this myth with actual historical facts:

Two other matters closely connected with hygiene remain to be discussed: the bath and the drinking water supply. Even as early as the thirteenth century the private bath made its appearance: sometimes with a dressing room, as we learn from a sixteenth century Nürnberg merchant’s household book. In 1417, indeed, hot baths in private houses were specially authorized by the City of London. If anything were needed to establish the medieval attitude toward cleanliness the ritual of the public bath should be sufficient.

Bathhouses were characteristic institutions in every city, and they could be found in every quarter: complaint was even made by Guarinonius that children and young girls from ten to eighteen years of age ran shamelessly naked through the streets to the bathing establishment. Bathing was a family enjoyment. These bath-houses would sometimes be run by private individuals; more usually, perhaps, by the municipality.

In Riga as early as the thirteenth century bath-houses are mentioned, according to von Below; in the fourteenth century there were 7 such houses in Würzburg; and at the end of the Middle Ages there were 11 in Ulm, 12 in Nürnberg, 15 in Frankfurt-am-Main, 17 in Augsburg, and 29 in Wien. Frankfurt had 29 bath-house keepers as early as 1387. So widespread was bathing in the Middle Ages that the bath even spread as a custom back into the country districts, whose inhabitants had been reproached by the writers of the early Fabliaux as filthy swine. What is essentially the medieval bath lingers in the Russian or Finnish village today.

Bathing in the open, in a pool in the garden or by a stream in the summer time of course remained in practice. Public baths however were for sweating and steaming and thorough cleanliness: it was customary to take such a purging of the epidermis at least every fortnight. In time, the bath-house came to serve again as it had in Roman times; it was a place where people met for sociability, as Dürer plainly shows in one of his prints, a place where people gossiped and ate food, as well as attended to the more serious business of being cupped for pains or inflammatory conditions. As family life in the late medieval town deteriorated, the bath-houses became the resort of loose women, looking for game, and of lecherous men, looking for sensual gratification: so that the medieval word for bath-house, namely, stew, comes down to us in English as a synonym for brothel: indeed, it is so used as early as Piers Plowman.

–LEWIS MUMFORD, The Culture of Cities, 1938 pp. 46-47

Mumford goes on to describe how all of these tropes about the Middle Ages were actually far more accurate descriptions of the dirty and crowded conditions of the late eighteenth century and throughout the nineteenth—the “insensate” early industrial towns of the Industrial Revolution.

Why do these lies persist? Where did they come from? As I’ve said so often before, one of the ways we are brainwashed into believing that we are better off today than at any other time in history is through control of our knowledge of the past. In other words, there is clearly a political agenda behind it. No doubt the proliferation of such idiocy also derives in part from our abominable education system. These outrageous myths are easy to debunk with a bit of knowledge. Consider this my small contribution.

And, just as a reminder, everyone didn’t drop dead at age 30 in the past, either.

Revenge of the Luddites

Posted on Reddit is this excellent criticism of the attitude that “technology will save us”:

Not to shit on your comment or idea but it’s very typical that an absolutely avoidable situation presents itself (burning down the Amazon) and Redditors seem to shift the conversation to some pie in the sky idea that may or may not work.

Yeah, lab grown meat is a cool idea, but cattle farming is not going anywhere anytime soon. All that needs to happen is to crack down on the dickheads burning down our rainforests. Brazilians need to stand up to their president and the international stage needs to apply pressure.

Before we get lab grown meat, nuclear fusion, mcboatyface saving the ocean, Mars colonies, CRISPR editing out cancer in the embryo, bacteria eating carbon, cops with cameras on their pistols, bitcoin-esque online ledger based voting systems, et fucking cetera, can we PLEASE just make the easy simple legislative changes that have worked in the past and don’t require educating and convincing huge swaths of people on something they never heard of?

The way you get to futuristic ideas is not by waiting until the world is almost destroyed and hoping for a bandwagon. It’s always been little steps. Brazil is burning down rainforests. Unions in the US barely exist. China is running god damn concentration camps. There is value in pragmatism and I wish more people were passionate about things that might not seem as exciting as biofuel made from protozoa or whatever.

Outstanding comment, and you can think of much of the Hipcrime Vocab (especially the old site) as an extended argument of exactly that point. It’s nice to see I’m not totally alone on this. And there’s another good comment from a user called neoliberalaltright(!!!):

That aside, the point is that discussing lab grown meat is a distraction from a real solution that exists right now, and distractions of this form are often meant to avoid confronting the idea of any sociological responsibility that you might have.

Smoking was massively reduced in the United States through collective action and education, which has reduced preventable deaths. Suppose it were the 80s, and every time someone told me “smoking causes lung cancer” I said “well researchers are working on curing lung cancer”. Would that be a reasonable response? Sure, researchers are curing lung cancer, but people can stop smoking right now. Us talking about curing lung cancer doesn’t speed up curing lung cancer, while us talking about abstaining from smoking might get people to abstain from smoking. So why focus our discussions on the former?

And on a related note: Waymo CEO says true autonomous cars will never exist. Waymo is the self-driving car development arm of Alphabet (Google’s parent company). The headline is a little deceptive, because what he’s really saying is not that self-driving cars can’t be built (they already exist), but that there will always be inherent limitations in the technology that prevent the extreme utopian vision of everyone having a 100% self-driving car from ever coming to fruition:

…the industry leader in self-driving cars last year announced that truly autonomous Level 5 vehicles – those that can operate in all conditions with no human input – “will never exist”...Speaking at the Wall Street Journal’s D.Live Conference on 13 November 2018, [Waymo’s] CEO, John Krafcik, told the audience “autonomy will always have constraints”.

In the short and medium-term, it seems likely that cars will adopt smarter versions of the technology that’s already incorporated in current cars: lane assist, emergency braking, active cruise control. Beyond that, the next step will be small sections of highway that may allow hands-off driving for suitably connected cars. However, for those that dream of having a snooze or watching a movie while the car handles the stress of the daily commute, don’t hold your breath.

And, exactly as I predicted years ago: The world’s first solar road has turned out to be a colossal failure that’s falling apart and doesn’t generate enough energy, according to a report (Business Insider)

Before I quote from that article, I just thought I would post a comment I received on the original post about “SOLAR FREAKIN’ ROADWAYS!!!” all the way back in 2014. I’ve omitted where he quotes from my original post and kindly cleaned up all the typos and misspellings (guess they don’t teach composition anymore in engineering school). I’ve also highlighted some of the most delicious parts:

You don’t have any clue what you are talking about. None whatsoever.

A. Solar panel roadways already exist, water pipes are run under roads and they are used to heat up water.

B. No one is saying redoing all the roads that way. That is what is called a strawman argument.

C. The electrical grid is old, it is old because it fucking works. A transformer properly designed can have a +100 year lifespan: almost no moving parts, few chemical reactions, low physical stresses, simple design with few parts to fail. Of course you didn’t know this because in liberal arts school they didn’t teach it.

D. Electrical lines hung above ground suffer from fewer losses than those underground. Putting wires underground is due to space concerns and because the cost of maintaining them exceeds the expected cost of power losses. In most new communities they are buried underground.

E. Putting panels on people’s roofs means multiple owners vs a road which involves one owner. It also means economies of scale.

F. I am not going to even respond to your crap on self-driving cars, which you are not in any way qualified to talk about. Go out and get a degree in EE or CS and then we shall chat.

People like you are always standing in the way of progress. Your intellectual ancestor was probably busy rambling about how fire doesn’t work and eating raw meat while huddling for warmth was better. Damn Luddites.

The world isn’t falling apart; you just can’t stand the fact that random brats aren’t consulted on important decisions, so you project your own failures onto the world. You walk around with a prophet complex screaming that the sky is falling instead of accepting the cold truth: “the world is fine, it is my life that has failed.”

But no worries: you will read this and get mad, but within an hour you will have forgotten and will resume your posts explaining to the world how you are right about everything and it is the world that is wrong.

And now, onto our story:

Solar roads were promised to be one of the biggest unprecedented revolutions of our time, not just in the field of renewable energy but in the energy sector generally. Covering 2,800 square meters, Normandy’s solar road was the first in the world, inaugurated in 2016, in Tourouvre-au-Perche, France.

Despite the hype surrounding solar roads, two years after this one was introduced as a trial, the project has turned out to be a colossal failure — it’s neither efficient nor profitable, according to a report by Le Monde.

The unfortunate truth is that this road is in such a poor state, it isn’t even worth repairing. Last May, a 100-meter stretch had deteriorated to such a state that it had to be demolished. According to Le Monde’s report, various components of the road don’t fit properly — panels have come loose and some of the solar panels have broken into fragments.

On top of the damage and poor wear of the road, the Normandy solar track also failed to fulfill its energy-production goals. The original aim was to produce 790 kWh each day, a quantity that could illuminate a population of between 3,000 and 5,000 inhabitants. But the rate produced stands at only about 50% of the original predicted estimates. In its second year, the energy production level of the road further dwindled and the same downward trend has been observed at the beginning of 2019, indicating serious issues with efficiency.

Even rotting leaves and thunderstorms appear to pose a risk in terms of damage to the surface of the road. What’s more, the road is very noisy, which is why the traffic limit had to be lowered to 70 km/h.

What about the solar roadways in the United States?

Another solar road suffered a similar fate in the US. There were concerns, according to Daily Caller, that as the panels wouldn’t be tilted to follow the sun and would often be covered by cars during periods when the sun was out, the whole project would be completely inefficient.

Despite costing up to roughly $6.1 million, the solar road became operational in 2016 — 75% of the panels were broken before being installed, it doesn’t generate any energy, it can’t be driven on, and 83% of its panels are broken, according to Daily Caller. One electrical engineer even went as far as describing it as a “total and epic failure” in an interview with KXLY news. Even if it had been functional, the panels would have been able to power only a small water fountain and the lights in a restroom, according to Daily Caller.

Wanna chat now, anonymous? Who doesn’t know what they’re talking about now, bitch? I guess I don’t need to get that engineering degree after all. (Sorry, I just couldn’t resist!) Score one for the “damn Luddites.”

Too bad that anonymous commenter will probably never read this.

UPDATE: Here’s Lloyd Alter’s coverage on Treehugger saying pretty much the same things.

Against Against Against Billionaire Philanthrocapitalism

Slate Star Codex has recently published a full-throated defense of modern Neofeudalism.

This whole essay is so ridiculous, so insipid, so misleading, so pedantic, and so maddeningly idiotic that I just can’t help but respond to it point-by-point. It’s also so chock full of false claims, irrelevancies, red herrings, and straw-man arguments that one would think the self-proclaimed masters of “logic and reason” over there would be ashamed to publish it.

Now, if you don’t know, Slate Star Codex is a big part of the whole Neoliberal online thought collective that masquerades as “officially nonpartisan enlightened centrism™.” But in this rather poorly thought-out post, the mask is ripped off for all to see. And it’s not pretty.

It’s a classic example of a prolific genre I like to call “Neoliberal contrarianism.” One of the most prodigious practitioners of this genre is Megan McArdle, who has built an entire career on it (sponsored by the usual suspects). Other notable practitioners of the genre include David Brooks, Thomas Friedman, Matt Yglesias, Kevin Drum, Tyler Cowen, Sam Harris, and many others. All of Steven Pinker’s recent books can be considered an exercise in this genre.

So, for example, this genre will tell you why the middle class is better off today than ever before, and is, in fact, getting richer every day! Why wages are actually going up. Why it’s just a silly myth that all of the gains in the economy are going to the top ten percent of households. Why housing is actually more affordable than ever. Why high health care costs, expensive drugs and copays are actually good for us. Why student debt isn’t a big problem. Why massive transnational corporations and monopolies are the greatest thing ever! In short, why everything you see happening around you every day isn’t really happening. And they’ve got the graphs and charts to prove it!

And they’ll usually tell you all this from some exotic destination they’ve traveled to on holiday, because they’re citizens of the world, after all, and national borders are anachronisms for poor losers who can’t handle change. How can they afford that, you ask? Well, being a shill for Neoliberalism has its perks, and it beats having to work for a living.

Anyway, the post references some articles mildly critical of billionaire philanthrocapitalism. But after reading the whole article, it doesn’t really seem to address the central arguments at all. And those arguments I get primarily from Anand Giridharadas’ excellent book on the subject, Winners Take All: The Elite Charade of Saving the World. The post doesn’t reference that book—nor any of the fundamental arguments it makes—anywhere in the article, as far as I can tell.

Giridharadas characterizes the benevolence of the transnational plutocratic class as instances of “extreme taking followed by extreme giving.” What does he mean by that?

He means that the people who have anointed themselves as the “saviors” of the human race are ultimately also the same ones who are responsible for bringing the world to the brink of disaster in the first place. Here’s Giridharadas speaking about being invited to canape-filled “ideas” conferences in Aspen where the plutocrats got together to hobnob and make connections:

“They would meet these four times [a year in Aspen]. They would read Plato and Aristotle; they would also read Gandhi, and they would also read—and this was a bit of a clue—Jack Welch. And having read them, they would discuss this, and talk about how they could make more of a difference, give back, not just run their companies, but do something more.”

“I was invited into this thing—I’m obviously not a businessperson—but I was invited because they figured out that twenty businesspeople in a room is a recipe for human Ambien, so they decided [that in] every class, they would put a TV person, a writer, an artist, some activist…so I was the Indian spice in my class…”

“It was a really interesting experience to have access to people that I don’t normally have access to, or talk to, or know. People who run an aviation repair business in Oklahoma—things like that. These are not people I meet in my life. And it was very interesting, and it was all about ‘We’re going to change the world’. ‘We’re going to get together and we’re going to solve the biggest problems of our time.’ ‘We’re going to fight inequality; we’re going to advance injustice [sic]’….”

“And as I got deeper and deeper into that world, it started to dawn on me…that the same people who gathered in Aspen, when you actually dug a little bit into what they did in their day jobs, were the people causing the problems they were trying to solve. They were the bankers who had caused the 2008 meltdown around the world, now talking about how to increase housing justice. They were the people who sell soft drinks to kids that foreshorten their lives and give them diabetes and all these other conditions, talking about health equity. They were the very people in Silicon Valley [who were] starting to compromise all of our privacy…[and] starting to frankly let their platforms be used as vessels for cyberwar on our electoral processes. That was happening, and they were letting it happen, because they didn’t want anything to get in the way of their growth—basically selling out democracy itself—and then they were coming to Aspen to talk about freedom.”

“And it just began to grate on me. I have to say, I was not alone. There were a bunch of people who started sitting in the back row, kind of complaining. It was all the people they shouldn’t have let in: the artists, the writers, the journalists—the mistakes. I don’t know if they do that anymore…”

Meaning Spotlight on Anand Giridharadas (YouTube)

Another point he takes aim at is the idea of “doing well by doing good.” This refers to the idea that the best way to change the world is to get rich and make a fortune. Not only that, but that the richer they get, the better off the world will be. In other words, a “win-win” situation, as they like to call it. For the plutocrats, change is good—so long as it doesn’t threaten their obscene wealth and profits in the slightest. This means that, no matter how much they supposedly “help,” they make damn sure that many options are off the table from the start (like paying higher tax rates or closing offshore tax loopholes, for instance). After all, if the more money they make, the better the world becomes, then by the same rationale, if they make even slightly less money, the world must become a worse place for everyone! Here’s Giridharadas again, describing the “Summit at Sea”—a networking event with 3,000 entrepreneurs on a cruise ship to the Bahamas:

“They were—almost to the person—all entrepreneurs. Like, none of them worked (some worked for big companies, but that was not the majority. More of the speakers maybe worked for Apple, or things like that). Entrepreneurs, but entrepreneurs convinced that every dollar they make is making the world better by ten dollars. That they are almost these sort of Christ-like business figures, who are sacrificing by making money, and helping others on this scale. ‘It’s my cupcake company that’s going to help girls in Afghanistan.’ ‘You buy these shoes, and we will put a shoe on some other foot in some other country that you’ll never be able to verify’…and everybody on that boat shared that ideology…”

“What was so fascinating was the way in which all these things come together in this religion: making money, promoting yourself, making the world a better place. Win-win. One way to think about Winners Take All is an attempt to fire a lot of ammunition at the fraudulent idea of win-win. Of doing well by doing good. Of this idea of making the world a better place that tells rich people and corporations that nothing has to change for them to improve the state of the world. That you can somehow, in a town like this, [you can] empower workers—give them more money, whatever—without that ever coming at the expense of the people who own the companies.”

And another point he makes is, “Why do we expect that the people best equipped to solve the world’s problems are the ones who are disproportionately causing the world’s problems in the first place?” How did they cause those problems, you ask? Offshoring labor, profiting off the global race to the bottom (wage arbitrage), financialization and asset stripping, reckless speculation, hiding money in offshore accounts, dodging taxes with fancy accounting schemes, running brutal sweatshops in the Third World (or the First World if you’re Amazon), union-busting, working your employees half to death, fighting minimum wage increases, gouging consumers with usurious interest rates and bogus charges, spying on us and selling our online personal information to the highest bidder, deceitful advertising, peddling foods laden with salt, fat, sugar, and other addictive chemicals, lobbying politicians all over the globe for lax regulations and low taxes, bankrolling sock-puppet politicians, shaking down hard-up local governments for corporate welfare, lobbying, driving locally-owned business into bankruptcy, polluting the environment, overharvesting endangered natural resources, enclosing the commons—the list is almost endless, and could fill an entire book by itself. All in a day’s work.

Or, as a shorthand, we could say, Neoliberalism.

CEO compensation has grown 940% since 1978. Typical worker compensation has risen only 12% during that time (Economic Policy Institute)

So, without further ado, let’s get to it.

Points 1 & 2 fall into the “hurt billionaire feelings” category. How dare you be so “uncivil” to these lovely, benevolent plutocrats! This is sort of like the common argument, ‘We can’t expect billionaires to respect the law or pay taxes, because then they’ll just move somewhere else!’ This neglects the unfortunate fact that by allowing billionaire plutocrats to wield such disproportionate power and influence in the first place, they can hold essential functions of state hostage to their very whim. And that’s a good thing?

Here’s an example of how desperate, weak and ridiculous the arguments are right off the bat:

#1 Which got more criticism? Mark Zuckerberg giving $100 million to help low-income students? Or Mark Zuckerberg buying a $59 million dollar mansion in Lake Tahoe?

Well, presumably the former, because it’s an example of a rich plutocrat seizing power normally attributed to state and municipal governments, and coming with significant strings attached. That is, the former is an effect of billionaires deliberately inserting themselves into public policymaking and trying to shape it to their own ends and preferences. It also raises very important questions, such as why schools are so desperately revenue-starved that they need to accept handouts from “benevolent” plutocrats in the first place. It also affects the lives of thousands of American citizens.

The latter was just, well, buying stuff. People do that every day. Why would that even be newsworthy, since it doesn’t affect anyone else? (except, I assume, his immediate neighbors)

Some teachers’ unions have made corporate taxation a part of the debate over school cuts: the Saint Paul Federation of Teachers talks about the decline in taxation of Minnesota’s largest corporations (“Thirty years ago, Bancorp, EcoLab, Travelers Insurance, 3M and Target were taxed at 13.6 percent. That rate has been cut to 9.8 percent. Wells Fargo paid $15 million less in 2014 than they paid in 1990, when the tax rate was 12 percent. In 2014, 10 corporations paid $31 million less than they did in earlier periods”) and explicitly connects those tax giveaways to the budgetary shortfalls that harm the city’s kids.

It’s not enough that corporations give back some of that money in the form of charitable donations: those donations always come with strings attached, shaping curriculum and activities to the priorities of corporate benefactors, and the funding can be withdrawn any time our public schools do work that cuts against the corporate agenda.

US tax shortfalls have our public schools begging for donations (BoingBoing)

So the question is utterly nonsensical on its face. We’re getting really desperate here and we’re only on point #1.

#2 If attacks on billionaire philanthropy decrease billionaires’ donations, is that acceptable collateral damage in the fight against inequality?

In other words, how dare you criticize our benevolent plutocratic overlords—they might take their money and go home! That is, bald-faced extortion.

Suppose Jeff Bezos is watching how people treat Bill Gates, and changes his own behavior accordingly. Maybe in the best possible world, when people attack Gates’ donations, Bezos learns that people don’t like ruthless billionaires, decides not to be ruthless like Gates was, and agrees to Bernie Sanders’ demand that he increase his employees’ pay by $4/hour. But Bezos also learns people criticize billionaires’ philanthropy especially intensely, decides not to be charitable like Gates was, and so ten million people die. You’ve just bought an extra $4/hour for warehouse workers, at the cost of ten million lives.

Wow. Just…wow.

So, if Bezos has to pay his workers a reasonable wage, then people will die???!!  How about we make them give away this money? If only society had some sort of mechanism **cough, taxes, cough** to do that. Oh well.

Doesn’t this logic just reinforce the dangers of allowing private government by whim?

So, really, it’s kind of like the following argument: if we dare criticize droit de seigneur, what happens if the lords lay down their arms and refuse to defend our kingdom? We might get raided, and someone might get hurt. We have no choice but to comply with their every dictate. Please, sire, take my betrothed’s maidenhead, with my full blessing. And let me bend the knee and kiss your ring, besides, Milord. (Yes, I’m aware this rite was a myth, but the example still holds.)

And by the way, you can criticize the government’s policies and priorities without the fear that the government will just up and decide to stop paying for essential services one day in a huff like Achilles quitting the battlefield to sulk in his tent. In fact, such criticism and debate is an essential part of the process. Not so, apparently, with billionaire benevolence, which is dependent on appeasing their fragile egos and a sufficient amount of grovelling. Which flows directly into the next point:

#3 How much gratitude vs. scrutiny do billionaire donors get?

This is a weird one. Here, he does some kind of Twitter search, and finds that public opinion is sometimes disproportionately hostile to these trickle-down “gifts” that come with strings attached. Rather than take that as a sign of some sort of “wisdom of the crowd”, he just sort of handwaves it off.

[As a side note, this whole notion of the so-called “wisdom of crowds” is very selectively applied by Neoliberals. When it confirms what they want it to, it clearly demonstrates the “wisdom” of the crowd, as opposed to fallible individuals. But when popular opinion goes against their Neoliberal belief system, or for socialistic ideas, then it suddenly becomes just the ignorant rabble acting “irrationally” and desperately in need of enlightenment by the “rational” Neoliberals (typically in the form of copious charts and graphs – after all, who are you going to believe, us or your own lying eyes?)]

Although some donors like Bill Gates are generally liked, others, like Zuckerberg and Bezos, are met with widespread distrust.

Besides, well, who cares? How is any of this relevant at all? I mean, at all? Again, pretty weak tea from the self-appointed supreme masters of “logic and rationality.”

Now we get to the good stuff. Here, he lists a very common argument by critics:

#4 Since billionaires have complete control over their own money, they are helping society the way they want, not the way the voters and democratically-elected-officials want. This threatens democracy. We can solve this by increasing taxes on philanthropy, so that the money billionaires might have spent on charity flows back to the public purse instead.

Well, that’s a little distorted: we’re not taxing philanthropy, we’re taxing wealth. Not sure why the misstatement here. Is it deliberate? But, anyway, all that sounds pretty reasonable. How are you going to argue against that?

Now, here’s where things really start getting pretty fucking ridiculous. As you knew would happen, he lists chapter and verse of all the good and worthy causes that benevolent billionaires have showered their (totally 100% fairly gotten) fortunes on:

Two of the billionaires whose philanthropy I most respect, Dustin Moskovitz and Cari Tuna, have done a lot of work on the criminal justice reform. The organizations they fund determined that many innocent people are languishing in jail for months because they don’t have enough money to pay bail; others are pleading guilty to crimes they didn’t commit because they have to get out of jail in time to get to work or care for their children, even if it gives them a criminal record. They funded a short-term effort to help these people afford bail, and a long-term effort to reform the bail system.

If Moskovitz and Tuna’s money instead flowed to the government, would it accomplish the same goal in some kind of more democratic, more publically-guided way? No. It would go to locking these people up, paying for more prosecutors to trick them into pleading guilty, more prison guards to abuse and harass them. The government already spends $100 billion – seven times Tuna and Moskovitz’s combined fortunes – on maintaining the carceral state each year

And where, exactly, does that carceral state come from, after all? Why do we have it in the first place? Oh yeah, that’s right: to defend the property of the rich and powerful. But, aside from that, certainly the good works of these two individuals must more than make up for the unapologetic ratfuckery perpetrated by the rest of the plutocratic billionaire class against the rest of us, no?

“Corporations that run prisons continue to protect their profit margins in less illegal and more insidious ways. These corporations stand to make more money when more people are sentenced to prison, so they work hard to influence policy and push for harsher sentencing laws.

A report from the Justice Policy Institute details how prison corporations use lobbyists, campaign contributions, and relationships with policymakers to further their own political agenda. For instance, the Corrections Corporation of America (CCA), the largest private prison company in the US, has spent $17.4 million on lobbying expenditures in the last 10 years and $1.9 million on political contributions between 2003 and 2012.

In 2013, the CCA and another major prison company, the GEO Group, also funded lobbying efforts to stop immigration reform, killing the path to legal status for over 11 million undocumented people in order to keep undocumented immigrants flowing into their facilities, as well as securing increased congressional funding to incarcerate those same people in for-profit prisons.”

Private prisons need to be made illegal. (Reddit)

But wait, there’s more!

Or take one of M&T’s other major causes, animal welfare. Until last year, California factory farms kept animals in cages so small that they could not lie down or stretch their limbs, for their entire lives. Moskovitz and Tuna funded a ballot measure which successfully banned this kind of confinement. It reduced the suffering of hundreds of millions of farm animals and is one of the biggest victories against animal cruelty in history.

If their money had gone to the government instead, would it have led to some even better democratic stakeholder-involving animal welfare victory? No. It would have joined the $20 billion – again, more than T&M’s combined fortunes – that the government spends to subsidize factory farming each year. Or it might have gone to the enforcement of ag-gag laws – laws that jail anyone who publicly reports on the conditions in factory farms (in flagrant violation of the First Amendment) because factory farms don’t want people to realize how they treat their animals, and have good enough lobbyists that they can just make the government imprison anyone who talks about it.

Highlighting opposition to ag-gag laws by a couple of Silicon Valley oligarchs is rich indeed, given that the whole reason such laws exist in the first place is lobbying and corruption by wealthy agribusinesses and their sociopathic billionaire allies! Somehow, I don’t think the average person is pushing for laws to prevent them from finding out how their own food is produced, do you?

How ALEC Has Undermined Food Safety By Pushing ‘Ag Gag’ Laws Across The Country (ThinkProgress)

“Ag-gag” laws — which ban the collection of evidence of wrongdoing on farms, from animal cruelty to food-safety violations — are a sterling example of how monopolism perpetuates itself by taking over the political process.

As American agribusiness has grown ever-more concentrated — while antitrust regulators looked the other way, embracing the Reagan-era doctrine of only punishing monopolies for raising prices and permitting every other kind of monopolistic abuse — it has been able to collude, joining industry groups like ALEC, the American Legislative Exchange Council, which drafts industry-favoring “model legislation” and then lobbies state legislatures to adopt it.

Court strikes down Iowa’s unconstitutional ag-gag law (BoingBoing)

And, of course, the underlying reason why antitrust enforcement has been abandoned, and why businesses across industries have become increasingly concentrated, has a lot to do with the plutocrats’ wholesale purchase of the economics profession, which has led to the pushing of Neoliberal and “Chicago School” policies via an unfathomably large constellation of universities, think-tanks, journals, publishing houses, magazines, online resources, etc., etc.

Kind of overwhelms all that philanthropy, doesn’t it?

Forgive me if I’m less than persuaded by this example of a couple of Facebook billionaires (and let’s not even get into how fucking sinister Facebook is). Help with one hand, hurt with the other. Or, as Giridharadas put it, extreme taking followed by extreme giving.

Here’s another howler:

George Soros donated/invested $500 million to help migrants and refugees. If he had given it to the government instead, would it have gone to some more grassroots migrant-helping effort?

No. It would have gone to building a border wall, building more camps to lock up migrants, more cages to separate refugee children from their families. Maybe some tiny trickle, a fraction of a percent, would have gone to a publicly-funded pro-refugee effort, but not nearly as much as would have gone to hurting refugees.

And how exactly did Trump come to power in the first place? Could it be the millions and millions of dollars of Dark Money spent by plutocrats—the Mercer family in particular—as exhaustively documented in Jane Mayer’s indispensable book, Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right? Are these the policies of government, or rather the policies of one particularly heinous administration, one that has been installed and consistently backed by sociopathic members of the billionaire elite class since day one (via Fox News, Sinclair Broadcast Group, Cambridge Analytica, et al.)?

The Reclusive Hedge-Fund Tycoon Behind the Trump Presidency (Jane Mayer, The New Yorker)

It’s the oldest trick in the book: elect horrible Republicans who do horrible things and then use it as proof as to just how horrible the government is. The solution? Private charity, of course!

So, the best system of government, according to SSC, is one in which the few “good” billionaires spend their money on defeating the laws written by, and for the benefit of, the other set of “evil” billionaires who manipulate and control our government? So the “good” billionaires make up for the “evil” ones? Is that seriously the argument here? Are you f*#king kidding me???

And this is supposed to be the ultra-rational “reason and logic” crowd. Apparently not when it comes to defending Neoliberalism. The causes these “good” billionaires are dedicated to fighting are all the ruinous consequences of the policies favored by the rest of the billionaire class who control the damn government in the first place! But let’s move on.

#5…but the US government is not a charity. Even when it’s doing good things, it’s not efficiently allocating its money according to some concept of what does the most good.

No, the U.S. government is not a charity, because it has to, you know, actually govern the fucking country! That’s kind of important, after all. It has a lot of things it must allocate money to (what’s called non-discretionary spending). That’s simply the nature of government—every government in the world.

Nevertheless, allocating more money to health and education would certainly do a lot of good, wouldn’t it? And what’s stopping that, I wonder? Hmmmm…

Oh yeah, I remember now: HOWYAGUNNAPAYFORIT? The one, single, magical word, the all-powerful incantation perennially invoked by the plutocrats and their media lackeys that assures that the government cannot, and will never, ever, be able to adequately address the pressing problems facing the American people today. And where, I wonder, does this ubiquitous phrase originate? Outer space? The American people themselves? After all, no one seems to be asking that of the private charities we’ve been discussing. No, government alone seems to be under that restriction (and only in areas that don’t directly benefit the plutocrats’ bottom line).

No, I have a sneaking suspicion that it ultimately originated from those same “benevolent” Neoliberal billionaire overlords who are getting their dicks sucked by this SSC essay.

Bill Gates saved ten million lives by asking a lot of smart people what causes were most important. They said it was global health and development causes like treating malaria and tuberculosis. So Gates allocated most of his fortune to those causes. Gates and people like him are such a large fraction of philanthropic billionaires that by my calculations these causes get about 25% of billionaire philanthropic spending.

The US government also does some great work in those areas. But it spends about 0.9% of its budget on them. As a result, one dollar given to a billionaire foundation is more likely to go to a very poor person than the same dollar given to the US government, and much more likely to help that person in some transformative way like saving their life or lifting them out of poverty. But this is still too kind to the US government. It’s understandable that they may want to focus on highways in Iowa instead of epidemics in Sudan

Yes, it is understandable, because the people of the United States presumably elect representatives to the government of the United States to solve problems faced by the citizens of the United States, and not those faced by Sudan. Presumably, the people who live in Sudan elect representatives to deal with the problems faced by Sudan. But, remember, in Neoliberal world, nation-states are so passé.

I mean, can you get more stupid than this? Here’s what really stopped that spending: extreme taking:

We had a once-in-a-generation opportunity to advance universal health care, benefitting many millions of uninsured Americans, saving lives, staving off bankruptcies, and indeed saving public dollars that would otherwise be devoted to emergency-room care. We had a means of helping to pay for it by a slight alteration in a tax break used by the most well-off—and, undoubtedly, the most generously insured—members of society. Yet the collective leadership of American philanthropy—a leadership, by the way, that had been with few exceptions silent about the redistribution of wealth upward through the Bush tax cuts, silent about cuts in social programs, silent about the billions of dollars spent on the wars of the last decade—found its voice only when its tax exemption was threatened, and preferred to let the government go begging for revenue elsewhere, jeopardizing the prospects for health-care reform, in order to let rich, well-insured people go on shielding as much of their money as possible from taxation.

… What that situation made plain to me was not just that philanthropy is quite capable of acting like agribusiness, oil, banks, or any other special-interest pleader when it thinks its interests are jeopardized. It helped me to see that however many well-intentioned and high-minded impulses animate philanthropy, the favorable tax treatment that supports it is a form of privatization. Money that would otherwise be available for tax revenue that could be democratically directed is shielded from public control for private use.

Democracy and the Donor Class (Democracy Journal)

…Yet even on issues vital for the safety of the American people, the government tends to fail in surprising ways. How much money does the US government spend fighting climate change?

Well, presumably not as much as it could be spending, given that large numbers of corporations are spending staggering amounts of cash to prevent the Green New Deal sponsored by Democrats from ever becoming law. But never mind that salient fact; since SSC is a Neoliberal site, this just gives it some more ammunition to bash the “incompetent” government. And why is government spending so low?

The Green New Deal is a loose set of ambitious goals outlined in a nonbinding resolution that calls for a global goal of achieving net zero carbon emissions by 2050 — but no policy specifics on how to get there. It is also an economic plan, which calls for massive federal investment, enhancing the social safety net, and millions of new jobs to overhaul the energy and infrastructure industries in the U.S.

Senate Majority Leader Mitch McConnell, R-Ky., announced last month that he would put the resolution authored by New York Democratic freshman Rep. Alexandria Ocasio-Cortez and Sen. Ed Markey, D-Mass., up for a vote. Republicans are trying to elevate the freshman lawmaker, who has described herself as a democratic socialist, and her ideas as emblematic of the Democratic Party going into 2020.

“In recent months our nation has watched the Democratic Party take a sharp and abrupt left turn toward socialism,” McConnell said earlier this month. “A flawed ideology that has been rejected time and again across the world is now driving the marquee policy proposals of the new House Democratic majority, and nothing encapsulates this as clearly as the huge, self-inflicted, national wound the Democrats are agitating for called the Green New Deal.”

The National Republican Senatorial Committee has also started using Ocasio-Cortez in attack ads similar to the way the party campaigns have run against House Speaker Nancy Pelosi, D-Calif., for years. In a recent tweet attacking Rep. Joaquin Castro, D-Texas, who is considering a run against Republican Sen. John Cornyn, the NRSC said Castro “votes with AOC 94% of the time.” Castro is a co-sponsor of the Green New Deal. House Republican candidates are also using Ocasio-Cortez and the Green New Deal in attack ads, like this one released Monday by former Rep. Karen Handel, R-Ga., who lost in 2018 and is seeking a rematch for a suburban Atlanta district.

https://www.npr.org/2019/03/26/705897344/green-new-deal-vote-sets-up-climate-change-as-key-2020-issue

But it’s “government” (and NOT Republicans, mind you) that is bad. Riiight…. As SSC points out:

In 2017, the foundation of billionaire William Hewlett (think Hewlett-Packard) pledged $600 million to fight climate change. One gift by one guy was almost twice the entire US federal government’s yearly spending on climate issues.

Gee, I wonder why that might be? SSC is gnomically silent. I guess government is just “bad”, amirite? It can’t possibly have anything to do with the bottomless pits of money fighting against any kind of environmental regulations, could it? And where, pray tell, might all that money be coming from? China? The moon? Martians?

I wish I could give a more detailed breakdown of how philanthropists vs. the government spend their money, but I can’t find the data. Considerations like the above make me think that philanthropists in general are better at focusing on the most important causes.

Of course they make you think that, because that’s the foregone conclusion you were heading to all along.

How government spends its (discretionary) money is theoretically decided by the American people themselves. But we’ve seen time and time again that the preferences of the average voter don’t matter one whit; only those of the donor class do. The very same donor class giving away all this wonderful charity money to poor people in Sudan, or helping animals, or whatever.

And, by the way, I’m sure SSC is taking into account how many people are saved from poverty by Social Security, and how many seniors are alive today because of Medicare, and so forth, when it does its accounting of “ineffective” government versus “effective” private charities (that’s sarcasm, by the way, folks).

#6 I realize there’s some very weak sense in which the US government represents me. But it’s really weak. Really, really weak. When I turn on the news and see the latest from the US government, I rarely find myself thinking “Ah, yes, I see they’re representing me very well today.”

Yet more Neoliberal government-bashing. Are you sensing a pattern here?

Well, he’s not alone—a lot of people think that, after all. But, once again, I’m left wondering, why on earth might that be??? Once again, SSC is mysteriously silent on this issue. Government must just inherently be “bad” and “ineffective” like the Neoliberals have been constantly telling us all along, right? Right???

New Data Shows Donor Class Does Not Accurately Represent Diversity and Policy Views of American Voters (Demos)

Political donors in the US are whiter, wealthier, and more conservative than voters (Vox)

Who really matters in our democracy — the general public, or wealthy elites? That’s the topic of a new study by political scientists Martin Gilens of Princeton and Benjamin Page of Northwestern. The study’s been getting lots of attention, because the authors conclude, basically, that the US is a corrupt oligarchy where ordinary voters barely matter…

Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic-Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism.

America functions as an oligarchy, not as a democracy (TYWKIWDBI)

“Economic elites and organized groups representing business,” eh? You mean, those same folks that SSC is busy bootlicking because of all the oats that are coming out of their asses to feed the hungry sparrows? Those guys?

Bill Gates has an approval rating of 76%, literally higher than God. Even Mark Zuckerberg has an approval rating of 24%, below God but still well above Congress. In a Georgetown university survey, the US public stated they had more confidence in philanthropy than in Congress, the court system, state governments, or local governments; Democrats (though not Republicans) also preferred philanthropy to the executive branch.

Okay, so earlier we dismissed popular opinion on Twitter; now we’re using popular polls to boost our case. Facts and logic!

Besides, what does the popularity of billionaire plutocrats, who have massive PR organizations at their disposal, matter at all? And how much can such polls be trusted? After all, who owns the media? Oh, yeah, that’s right: the plutocrats themselves!! (BTW, the fact that Bill Gates is more popular than God ought to scare the shit out of anyone, even Neoliberals.)

These 15 Billionaires Own America’s News Media Companies (Forbes)

Also, given that Big Business and sociopathic plutocrats have been waging an unremitting, fifty-year+ total war on “Big Government” using every resource available to them, I wonder if that might influence those poll numbers. But, in SSC’s world, that doesn’t exist, apparently.

When I see philanthropists try to save lives and cure diseases, I feel like there’s someone powerful out there who shares my values and represents me. Even when Elon Musk spends his money on awesome rockets, I feel that way, because there’s a part of me that would totally fritter away any fortune I got on awesome rockets. I’ve never gotten that feeling when I watch Congress. When I watch Congress, I feel a scary unbridgeable gulf between me and anybody who matters. And the polls suggest a lot of people agree with me.

It speaks volumes about Slate Star Codex (and the whole essay in general) that he sees people like Elon Musk and Jeff Bezos “representing him” when they “fritter away” billions of dollars on rocketships to Mars for themselves and their 1% pals. I could practically end the essay right here and now. As for me, I don’t feel that way; I feel exactly the same as Gil Scott-Heron in Whitey on the Moon. And while I’m guessing the average SSC reader is firmly ensconced in the former camp, seeing themselves as being on the winning side of the billionaires’ velvet rope, I’m willing to bet that, statistically, the majority of people feel more like I do (as indeed they should).

And, as a matter of fact, I do feel that politicians like Bernie Sanders, Elizabeth Warren, and AOC represent me (even if they don’t literally represent me, since I don’t reside in their states), more so than Bill Gates, Elon Musk, Jeff Bezos, or Mark Zuckerberg (whom I can’t vote for, either). I wish we had more politicians like them. Note, also, that none of those politicians are billionaires, or are funded by billionaire sugar daddies.

#7 Shouldn’t people who disagree with the government’s priorities fight to change the government, not go off and do their own thing?

Well, the plutocrats have already spent countless billions of dollars changing the government—they just changed it for their own advantage, and to the detriment of everybody else. They’re also spending billions of dollars to make sure it stays that way.

The money spent on lobbying is conspicuously absent from this article. Extreme taking followed by extreme giving. But the taking part is never mentioned. It’s like it doesn’t exist.

Then, SSC launches into some bizarre analogy between the democratically-elected U.S. government and the Church of Scientology (?) that makes absolutely no fucking sense whatsoever. They’re really grasping at straws here. I guess “facts and logic” don’t matter so much after all when you’re slinging the shit for Neoliberalism. Seriously, go there and read for yourself just how bizarre this is.

Also, do you realize how monumental a task “reform the government” is? There are thousands of well-funded organizations full of highly-talented people trying to reform the government at any given moment, and they’re all locked in a tug-of-war death match reminiscent of that one church in Jerusalem where nobody has been able to remove a ladder for three hundred years

“Do I realize how monumental a task ‘reforming the government’ is?”

Well, no, I don’t, but I know some folks who do. Their names are Charles and David Koch, and they know exactly what it takes to “reform” the government, since they’ve been doing exactly that over the last forty-odd years, and they’ve largely succeeded in their task. And there are many more like them: the Cato Institute, the Heritage Foundation, ALEC, the Federalist Society, etc.—far too many to name or count. We’ve encountered quite a few of them already.

And if you don’t think that the government has been “reformed”—exclusively for the benefit of the Chamber of Commerce and the investor/ownership class, mind you—then you are clearly a simpering idiot, and no one should pay any attention to anything you have to say ever again. The “ladder” has indeed moved, just for the benefit of certain people, exclusively.

Incidentally, another guy who has some idea of what it takes is named Bernie Sanders. Why, I wonder, have these benevolent billionaires not donated one solitary cent to him, while donating to more status-quo-favoring Democratic candidates and Republicans (mostly to Republicans)? In fact, not only have they not donated anything to him, they are almost unanimous in their opposition to his very candidacy, as Bernie himself has proudly acknowledged.

9. Does billionaire philanthropy threaten pluralism?

I really don’t understand this one. This isn’t really a common argument against depriving the government of revenue in favor of private charity with strings attached; SSC just seems to include it for no other reason than to have an argument that can be easily dismissed. A sort of “Washington Generals” argument, I guess. It does give us this gem, however:

The Multidisciplinary Association for Psychedelic Studies (MAPS) sponsors research into mental health uses of psychedelic drugs. You might have heard of them in the context of their study of MDMA (Ecstasy) for PTSD being “astoundingly” successful. They’re on track to get MDMA FDA-approved and potentially inaugurate a new era in psychiatry. This is one of those 1000x opportunities that effective altruists dream of. The government hasn’t given this a drop of funding, because its official position is that Drugs Are Bad.

Wow, using psychedelic research to justify private charity? That’s some next-level chutzpah right there! That’s like killing your parents and then begging the court for clemency because you’re an orphan.

Again, why, exactly, does the government (in the U.S.) believe that “Drugs are Bad?” SSC doesn’t say. Certainly the American people themselves don’t believe that, especially not with efforts towards decriminalization and legalization taking place all over the country (not to mention the enthusiastic drug use by citizens themselves!). So who exactly does believe that?

Well, we know that psychedelics were legal at one point. We also know that Nixon administration officials have since freely admitted that they were criminalized expressly to go after and eviscerate the anti-war movement and civil rights campaigns. And the Nixon administration was hardly an enemy of the plutocratic class; rather, it was following the dictates of the Powell Memorandum almost to a tee.

And now that bald, naked attempt at smashing the Left in this country is being used as a rationale for starving the government of funds and relying on the charity of unaccountable billionaire plutocrats? As I said, next-level chutzpah!

Or: in 2001, under pressure from Christian conservatives, President Bush banned federal funding for stem cell research. Stem cell scientists began leaving the US or going into other areas of work. The field survived thanks to billionaires stepping up to provide the support the government wouldn’t…

This one is even more outrageous. Unlike the Drug War, where the blame can be spread around between both parties, this one is exclusively a product of one single faction of one single party: the radical Fundamentalist Evangelical Republicans. Ironically, the same ones who reliably run on a platform of how “ineffective” the government is, how taxes are “theft,” how we need to “cut spending,” how poor people are “lazy,” etc., etc.

Or: despite controversy over “government funding of Planned Parenthood”, political considerations have seriously limited the amount of funding the US government can give contraceptive research. It was multimillionaire heiress Katharine McCormick who funded the research into what would become the first combined oral contraceptive pill.

Ummm, are you seeing a pattern here, folks? Because SSC sure doesn’t.

Aren’t these really just arguments for the Republican Party being banished from ever holding the levers of power at any time in this country?

This point is partly addressed by the next bullet point:

9. Aren’t the failures of government just due to Donald Trump or people like him? Won’t they hopefully get better soon?

Sounds like a good argument. What could be wrong with this one?

My whole point is that if you force everyone to centralize all money and power into one giant organization with a single point of failure, then when that single point of failure fails, you’re really screwed.

Remember that when people say decisions should be made through democratic institutions, in practice that often means the decisions get made by Donald Trump, who was democratically elected…

“Democratically elected?” Er, no, he wasn’t. He won because of the Electoral College. Only in the most pettifogging sense could he be considered “democratically elected.” And, thanks to gerrymandering and the concentration of the population into urban areas, fewer and fewer of our representatives are being “democratically elected” with every passing year. And let’s not even get into how much the election was influenced by foreign interference.

Also, the government isn’t a “single point of failure.” There are fifty state governments, plus D.C., plus the United States territories. Every single one of them is being bled dry of necessary revenue because of the actions of venal billionaire plutocrats and legalized bribery. How many cities have built brand-new sparkling sports venues for privately-owned sports teams, even while cutting budgets for university systems, as was done here in Wisconsin? How much money has been shelled out as corporate welfare to private corporations such as Foxconn (also here in Wisconsin)? And how can we forget Jeff Bezos infamously playing cities against each other in order to get the biggest taxpayer-funded bonanza to secure his shiny new headquarters?

Shit like that is exactly what we’re talking about when we talk about the “extreme taking” part of the equation.

In fact, the actions of politically-active plutocrats like the Koch Brothers are concentrated even more intensely at the state level than at the Federal level. And they’re hardly absent from municipal politics either, fighting against widely-supported initiatives like minimum wage increases and mandatory sick leave which would benefit literally tens of thousands of struggling American citizens all over the country (sorry Sudan—you’re on your own).

But, hey, I’m sure that donation to the symphony will make up for it, eh?

Besides, even if the Federal government were a so-called “single point of failure” (which disproportionately tends to fail when Republicans are in charge), it also has vastly more resources to alleviate poverty and solve big problems than any private charity. And that includes a license to print money, if only we would let it. Er, I mean, if only they would let us.

In fact, we can do whatever we want. Money doesn’t grow on rich people.

This point is even made by SSC in the essay itself:

8. The yearly federal budget is $4 trillion. The yearly billionaire philanthropy budget is about $10 billion, 400 times smaller.

For context, the California government recently admitted that its high-speed rail project was going to be $40 billion over budget (it may also never get built). The cost overruns alone on a single state government project equal four years of all the charity spending by all the billionaires in the country.

Compared to government spending, Big Philanthropy is a rounding error. If the whole field were taxed completely out of existence, all its money wouldn’t serve to cover the cost overruns on a single train line.

So, charity spending by plutocrats is both more effective than taxes, and also insignificant. Which is it? (Also, notice the subtle Neoliberal swipe at “wasteful” government spending on infrastructure. Classy!)

To a large extent, I would be far less hostile to efforts of private charity if they didn’t occur simultaneously with the constant, unremitting message pouring out from the billionaire class and their bought-and-paid-for corporate media shills that the United States is “broke” and cannot afford to pay for basic things like universal single-payer healthcare, free higher education, decarbonizing our energy infrastructure, or about a million other essential things that we’re told are “utopian” and simply “unaffordable”—in a country that has more millionaires and billionaires than any other country on earth. But I don’t see that stopping anytime soon—in fact, it’s intensifying. Again, it bears repeating that the politicians who support these things (Sanders, AOC, et al.) are vigorously opposed by this supposedly “benevolent” plutocrat class:

We could start with the 16 negative stories the [Washington] Post ran in 16 hours, and follow that up with the four different Sanders-bashing pieces the paper put out in seven hours based on a single think tank study.

Or you could take the many occasions on which the Post’s factchecking team performed impressive contortions to interpret Sanders’ fact-based statements as meriting multiple “Pinocchios”. In particular, we might observe the time the Post “factchecked” Sanders’ claim that the world’s six wealthiest people are worth as much as half the global population. It just so happens that one of those six multi-billionaires is [Jeff] Bezos, which would make an ethical journalist extra careful not to show favoritism.

Instead, after acknowledging that Sanders was, in fact, correct, the paper’s Nicole Lewis awarded him “three Pinocchios”—a rating that indicates “significant factual error and/or obvious contradictions.” This is because, the paper explained, even though the number comes from a reputable nonpartisan source, Oxfam, which got its data from Credit Suisse, “It’s hard to make heads or tails of what wealth actually means, with respect to people’s daily lives around the globe.”

Here’s the Evidence Corporate Media Say Is Missing of WaPo Bias Against Sanders (FAIR)

Now, for the big conclusion, which is just as insipid as the rest of the post.

So you’re saying these considerations about pluralism and representation and so on justify billionaire philanthropy?

Is he saying that? After all that, I still can’t tell.

The Gates Foundation plausibly saved ten million lives. Moskovitz and Tuna saved a hundred million animals from excruciatingly painful conditions. Norman Borlaug’s agricultural research (supported by the Ford Foundation and the Rockefeller Foundation) plausibly saved one billion people.

That’s nice, but totally irrelevant to the point: extreme taking followed by extreme giving. It only looks at half of that equation, and totally ignores the other half, thus becoming a straw man argument.

How many people have died over the past half-century because of draconian debt repayments demanded by banks from indebted third-world countries? And who is responsible for that?

Does one cancel out the other? Do two wrongs make a right?

Besides, we could sling around factoids all day and still not prove anything. How many lives have improved building codes (aka “evil” regulations) saved? Fifty? A hundred? A thousand? How about efficient municipal sanitation? Last time I checked, Flint still has lead in its drinking water. How about legally-required seat belts, which were fought against for years by big business (along with smoking prevention)?

Billionaire charity is filling a vacuum that should not be there in the first place.

I always have mixed feelings about news of the type “[Billionaire] donates to solve [horrific problem that should have been solved eons ago by officials],” because people should not be dependent on the generosity of the ultra-rich for basic human needs to be met. It reminds me of the time a local news program did a story on a girl who was selling ribbons and baked goods to raise money for cancer treatment that she needed and framed it as uplifting – “Look at this girl go!” That’s not uplifting; it’s a national disgrace that that girl’s life-saving medical treatment was dependent on how much she could fundraise.

Billionaire CEO makes $480,000 donation to Flint Community Schools for new water filtration systems (Reddit)

These accomplishments – and other similar victories over famine, disease, and misery – are plausibly the best things that have happened in the past century. All the hot-button issues we usually care about pale before them. Think of how valuable one person’s life is – a friend, a family member, yourself – then try multiplying that by ten million or a billion or whatever, it doesn’t matter, our minds can’t represent those kinds of quantities anyway. Anything that makes these kinds of victories even a little less likely would be a disaster for human welfare.

Agreed. Oh, and by the way, depriving governments of the necessary resources to save lives and improve the welfare of their citizens, and blocking desperately-needed social reforms that might slightly threaten profits, also makes these kinds of victories (more than) just a little less likely as well. And I also happen to believe that donating modest (for them) sums to the charities of their choice does not make up for the unrepentant ratfuckery and skullduggery perpetrated by the one-percent billionaire elite class all over the world against the rest of us since the rise of Neoliberalism.

The researchers found that states that expanded Medicaid saw higher rates of enrollment and lower rates of uninsurance. Among the 55- to 64-year-olds studied, researchers found, receiving Medicaid “reduced the probability of mortality over a 16 month period by about 1.6 percentage points, or a decline of 70 percent.” Based on their findings, they estimate that states’ refusal to expand the program led to 15,600 additional deaths.

This is in line with a growing body of research that shows Medicaid expansion has not only vastly increased access to health insurance, but also improved health outcomes. About 13.6 million adults gained Medicaid coverage under Obamacare.

Study: the US could have averted about 15,600 deaths if every state expanded Medicaid (Vox)

In 2017, the Royal Society of Medicine said that government austerity decisions in health and social care were likely to have resulted in 30,000 deaths in England and Wales in 2015. The rate of increase in life expectancy in England nearly halved between 2010 and 2017, according to research by epidemiology professor Michael Marmot. He commented that it was “entirely possible” that austerity was the cause and said: “If we don’t spend appropriately on social care, if we don’t spend appropriately on health care, the quality of life will get worse for older people and maybe the length of life, too.”

A paper released by the British Medical Journal in November 2017 estimated that the government austerity programme caused around 120,000 excess deaths since 2010. By 2018 figures from the Office for National Statistics (ONS) were showing a fall in life expectancy for those in poorer socioeconomic groups and those living in deprived areas, while average UK life expectancy had stopped improving. Public Health England was asked to carry out a review of life expectancy trends but government ministers said that the arguments put forward by some academics, that austerity had contributed to the change, could not be proved. ONS figures published in 2018 indicated that the slowdown in general life expectancy increase was one of the highest among a group of 20 of the world’s leading economies.

United Kingdom government austerity programme (Wikipedia)

Neoliberalism kills. Extreme taking followed by extreme giving, indeed. I wonder if SSC has an equal space in their heart for the people collapsing on the floors of Amazon warehouses from heat stroke and urinating in trash cans.

Probably not. Might reduce Bezos’ donations.

The main argument against billionaire philanthropy is that the lives and welfare of millions of the neediest people matter more than whatever point you can make by risking them. Criticize the existence of billionaires in general, criticize billionaires’ spending on yachts or mansions. But if you only criticize billionaires when they’re trying to save lives, you risk collateral damage to everything we care about.

Well, I’m criticizing the billionaires for a hell of a lot more than that.

Here’s the thing: The argument is not—repeat NOT—that the wealthy shouldn’t donate some of their money to worthy causes. It never has been. That’s a straw man.

It’s criticizing the culture of extreme taking followed by extreme giving.

The argument is that they do this while at the same time bending governments to their will, dictating policy, abusing and treating their own workers like human garbage, and spending unlimited funds to block badly needed social reforms that would slightly inconvenience them or reduce their ungodly profits by even negligible amounts.

It’s also asserting that the cost of billionaires assuming the power to alter our world does, indeed, come at the expense of other equally pressing social needs.

It reminds me of the whole discussion surrounding golden rice. If you opposed handing poor farmers this genetically-modified rice produced by agribusiness corporations, you were a sociopathic monster who wanted children to go blind. But if you wanted to alter the economic system so that farmers could actually afford to purchase a variety of foods to ensure adequate nutrition—or even grow their own vitamin-rich foods—well, then, you were a pie-in-the-sky utopian who didn’t understand economics.

To which I replied: if that’s the case, well, then, fuck economics. Who is the real monster here???

The way I see it, the argument that private charity is “superior” to government at solving pressing social problems rests entirely on the fact that Big Business has gutted and undermined democratic governments around the world for at least the past fifty years.

They then turn around and use the subsequent failures of government as a justification for seizing ever-more of the commons for themselves and their corrupt, sheltered offspring.

And that, my friends, is the primrose path to Neofeudalism in a nutshell. It’s the Road to Serfdom, except this one is real and it’s happening right now, in front of our very eyes, not due to too much democracy, but too little.

I guess I have to repeat this over and over again until it sinks in: The plutocrats fund Trojan Horse candidates who undermine the viability of democratic governments at every opportunity, and have done so for at least the last half-century. They then use the resulting “failures” and “ineffectiveness” of government as an argument and an excuse to hand them ever more power and control over society and its limited resources. Power which is accountable to no one. Resources which are theirs, and theirs alone. And this puerile, blatantly-biased, pathetically-reasoned joke of an essay by SSC is entirely in that vein. And it should put to rest any doubt that SSC is an expressly political project designed to benefit the One Percent elites and catapult pure Neoliberal propaganda under the guise of “reason” and “enlightened centrism™.”

SSC’s whole argument here basically boils down to this: better be kind to the billionaires, because it sure would be a shame if anything happened to deprive those poor, suffering recipients of their largesse. I mean, you wouldn’t actually want to make this political, would you?

Basically a Mob shakedown. “Nice place you got here. Sure would be a shame if anything happened to it. I and my associates can make sure that such an unfortunate thing doesn’t happen. Oh, and be sure to kiss my ring when you hand over the cash.”

And this is the best the vaunted “enlightened centrist” Neoliberal “fact and logic” crowd can do? The Neoliberals are seeing a global rebellion against their failed ideas everywhere they turn, and are getting increasingly scared and desperate. This is clearly a sign of that.

But, hey, at least their book reviews are good. Check out the latest on one of my personal favorites, Secular Cycles. The one on The Secret of Our Success is good too.

The Origin of Religion – Part 3

[Blogger note: I have apparently lost my USB drive, which contained all of my subsequent blog posts; thus, I’ll have to cut this short. I’ll try to finish this up using my recollection and some snippets left on my hard drive.]

In addition to what we spoke of before, there are several other “alternative” psychological ideas behind the origin and development of religion that the BBC article does not mention. Nonetheless, I feel these ideas are too important to be left out of the discussion. My summary of them follows below.

Terror Management Theory

Terror Management Theory (TMT) stems from a book written by the cultural anthropologist Ernest Becker in 1973 called The Denial of Death. In it, he asserted that we invest in what are, in essence, “immortality projects” in order to stave off the subconscious fear of our own inevitable demise. This tendency is not exclusive to religions, but is also applicable to all sorts of other secular philosophies and behaviors.

The introduction to Becker’s book online provides a good summary:

Becker’s philosophy as it emerges in Denial of Death and Escape from Evil is a braid woven from four strands.

The first strand. The world is terrifying…Mother Nature is a brutal bitch, red in tooth and claw, who destroys what she creates. We live, he says, in a creation in which the routine activity for organisms is “tearing others apart with teeth of all types — biting, grinding flesh, plant stalks, bones between molars, pushing the pulp greedily down the gullet with delight, incorporating its essence into one’s own organization, and then excreting with foul stench and gasses the residue.”

The second strand. The basic motivation for human behavior is our biological need to control our basic anxiety, to deny the terror of death. Human beings are naturally anxious because we are ultimately helpless and abandoned in a world where we are fated to die. “This is the terror: to have emerged from nothing, to have a name, consciousness of self, deep inner feelings, an excruciating inner yearning for life and self-expression — and with all this yet to die.”

The third strand. Since the terror of death is so overwhelming we conspire to keep it unconscious. “The vital lie of character” is the first line of defense that protects us from the painful awareness of our helplessness. Every child borrows power from adults and creates a personality by introjecting the qualities of the godlike being. If I am like my all-powerful father I will not die. So long as we stay obediently within the defense mechanisms of our personality…we feel safe and are able to pretend that the world is manageable. But the price we pay is high. We repress our bodies to purchase a soul that time cannot destroy; we sacrifice pleasure to buy immortality; we encapsulate ourselves to avoid death. And life escapes us while we huddle within the defended fortress of character.

Society provides the second line of defense against our natural impotence by creating a hero system that allows us to believe that we transcend death by participating in something of lasting worth. We achieve ersatz immortality by sacrificing ourselves to conquer an empire, to build a temple, to write a book, to establish a family, to accumulate a fortune, to further progress and prosperity, to create an information-society and global free market. Since the main task of human life is to become heroic and transcend death, every culture must provide its members with an intricate symbolic system that is covertly religious. This means that ideological conflicts between cultures are essentially battles between immortality projects, holy wars.

Here’s Becker himself:

…of course, religion solves the problem of death, which no living individuals can solve, no matter how they would support us. Religion, then, gives the possibility of heroic victory in freedom and solves the problem of human dignity at its highest level. The two ontological motives of the human condition are both met: the need to surrender oneself in full to the rest of nature, to become a part of it by laying down one’s whole existence to some higher meaning; and the need to expand oneself as an individual heroic personality.

Finally, religion alone gives hope, because it holds open the dimension of the unknown and the unknowable, the fantastic mystery of creation that the human mind cannot even begin to approach, the possibility of a multidimensionality of spheres of existence, of heavens and possible embodiments that make a mockery of earthly logic — and in doing so, it relieves the absurdity of earthly life, all the impossible limitations and frustrations of living matter. In religious terms, to “see God” is to die, because the creature is too small and finite to be able to bear the higher meanings of creation. Religion takes one’s very creatureliness, one’s insignificance, and makes it a condition of hope. Full transcendence of the human condition means limitless possibility unimaginable to us. [1]

Becker’s ideas are thoroughly grounded in the Freudian school, and Freud’s essential insight was that human actions, beliefs, desires and intentions are often motivated by hidden, subconscious forces which we are not fully aware of. In this case, the subconscious fear of death motivates us to embrace belief systems that allow us to symbolically transcend our own mortality.

One common trope I often hear about religion is that we simply came up with a bunch of fairy tales to cope with our existential fear of death, and that this explains religion.

But, as we’ve already seen, this is far from adequate in explaining the persistence and diversity of religious beliefs. As we saw, most ancient religions did not believe in a comfortable, cushy afterlife, and the tales of wandering spirits of the dead requiring constant appeasement do not provide much reassurance about what comes after death. If we just wanted to reassure ourselves in the face of our mortality, why didn’t we invent the “happy ending,” country-club afterlife straightaway? Why did such beliefs have to wait until after the Axial Age to emerge? And what about religions that believed in metempsychosis (transference of consciousness to a new body, i.e. reincarnation), rather than a comfortable afterlife?

Plus, this does not explain our beliefs in ghosts, spirits, and other invisible beings. Nor does it explain the extreme wastefulness and costliness of religion. The book itself says little about the origin and development of actual religion, and where it does, it deals exclusively with Western Judeo-Christian religions (the Christian existentialist Kierkegaard is especially cited).

Nevertheless, Terror Management Theory’s ideas have been empirically shown to have an effect on our belief systems and behavior. When knowledge of one’s own death has been subconsciously induced in test subjects (a technique called “priming”), people have been shown to be more clannish, more hostile to outsiders, more harsh to deviants, more likely to accept and dole out harsh punishments, and so forth (in short, more conservative). And, certainly the motivations for many strange behaviors—from the lust for power, to obsessive work and entrepreneurship, to desperate attempts to achieve lasting fame and stardom, to trying to create a “godlike” artificial intelligence, to beliefs about “uploading” one’s personal consciousness into computers, to scientific attempts to genetically “cure” aging and disease—can be seen as immortality projects motivated by a subconscious fear of death.

I would argue that the reason almost every culture known to man has believed that some sort of “life essence” survives the body after death stems from an existential fear of death similar to what Becker described. But the reason this belief took the forms that it did has more to do with some of the things we looked at last time–Theory of Mind, Hyperactive Agency Detection, the Intentional Stance, and so forth.

The noted anthropologist Bronislaw Malinowski wrote an essay on the purpose of religion which in many ways anticipates the ideas of Becker:

…in not a single one of its manifestations can religion be found without its firm roots in human emotion, which…grows out of desires and vicissitudes connected with life. Two affirmations, therefore, preside over every ritual act, every rule of conduct, and every belief. There is the affirmation of the existence of powers sympathetic to man, ready to help him on condition that he conforms to the traditional lore which teaches how to serve them, conjure them, and propitiate them. This is the belief in Providence, and this belief assists man in so far as it enhances his capacity to act and his readiness to organize for action, under conditions where he must face not only the ordinary forces of nature, but also chance, ill luck, and the mysterious, even incalculable designs of destiny.

The second belief is that beyond the brief span of natural life there is compensation in another existence. Through this belief man can act and calculate far beyond his own forces and limitations, looking forward to his work being continued by his successors in the conviction that, from the next world, he will still be able to watch and assist them. The sufferings and efforts, the injustices and inequalities of this life are thus made up for. Here again we find that the spiritual force of this belief not only integrates man’s own personality, but is indispensable for the cohesion of the social fabric. Especially in the form which this belief assumes in ancestor-worship and the communion with the dead do we perceive its moral and social influence.

In their deepest foundations, as well as in their final consequences, the two beliefs in Providence and Immortality are not independent of one another. In the higher religions man lives in order to be united to God. In simpler forms, the ancestors worshiped are often mystically identified with environmental forces, as in Totemism. At times, they are both ancestors and carriers of fertility, as the Kachina of the Pueblos. Or again the ancestor is worshiped as the divinity, or at least as a culture hero.

The unity of religion in substance, form and function is to be found everywhere. Religious development consists probably in the growing predominance of the ethical principle and in the increasing fusion of the two main factors of all belief, the sense of Providence and the faith in Immortality.

As we climb Maslow’s hierarchy of needs, we look for different things from our religions. Due to the “vicissitudes of life,” ancient peoples often sought more basic things related to material security: adequate rainfall, bountiful harvests, growing herds, protection from diseases, protection from raids, and so forth. They consulted spirits for decisions—whom to marry, when to go to war, how to bring back the rains, and so on. Today, with most of us living in societies where our basic material needs are met, we look for things like fulfillment, purpose, belonging and meaning using the same religious framework.

Religion as a Memeplex

The idea of memetics was first proposed by biologist Richard Dawkins in his 1976 book, The Selfish Gene. Dawkins made an explicit analogy between biological information (genes) which differentially reproduce and propagate themselves through time by using living organisms, and cultural information (memes), which live in human minds and reproduce via cultural imitation. A collection of related and reinforcing memes is called a memeplex (from the term, “coadapted meme complex”).

The underlying mechanisms behind genes (instructions for making proteins, stored in the cells of the body and passed on in reproduction) and memes (instructions for carrying out behavior, stored in brains, and passed on via imitation) were very similar, Dawkins thought, and the ideas underlying Darwinism could apply to both. This is sometimes referred to as “Universal Darwinism”:

The creator of the concept, and of its denomination as a “meme,” was Richard Dawkins. Other authors, such as Edward O. Wilson and C. J. Lumsden, had previously proposed the concept of the culturgen to designate something similar. At present Dawkins’s term has prevailed, although the theory of memes now includes contributions from many other authors. Therefore, to talk of memes today is not simply to talk of Dawkins’s theory of memes.

Daniel Dennett, Memes and Religion: Reasons for the Historical Persistence of Religion. Guillermo Armengol (PDF)

The behavior of both memes and genes is based around three principal factors: variation, competition (or selection), and retention (or persistence). As Susan Blackmore explains (a toy sketch of this loop follows the quoted passage below):

For something to count as a replicator it must sustain the evolutionary algorithm based on variation, selection and retention (or heredity).

Memes certainly come with variation–stories are rarely told exactly the same way twice, no two buildings are absolutely identical, and every conversation is unique—and when memes are passed on, the copying is not always perfect….There is memetic selection – some memes grab the attention, are faithfully remembered and passed on to other people, while others fail to get copied at all. Then, when memes are passed on there is retention of some of the ideas or behaviours in that meme – something of the original meme must be retained for us to call it imitation or copying or learning by example. The meme therefore fits perfectly into Dawkins’ idea of a replicator and Dennett’s universal algorithm…

Where do new memes come from? They come about through variation and combination of old ones – either inside one person’s mind, or when memes are passed from person to person…The human mind is a rich source of variation. In our thinking we mix up ideas and turn them over to produce new combinations…Human creativity is a process of variation and recombination. [2]
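To make that variation / selection / retention loop concrete, here is a minimal toy sketch in Python. This is my own illustration, not anything prescribed by Blackmore or Dawkins; the string “memes,” the mutation rate, and the “catchiness” score are all arbitrary assumptions. The only point is the shape of the loop: imperfect copying (variation), differential survival (selection), and carrying something of the original forward (retention).

```python
# Toy sketch of a meme-replicator loop (illustrative assumptions throughout).
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def mutate(meme, rate=0.1):
    """Variation: imperfect copying occasionally changes a character."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in meme)

def catchiness(meme):
    """Selection criterion (purely arbitrary here): how often the hook 'god'
    appears in the string. Any scoring rule would do for the demonstration."""
    return meme.count("god")

def generation(population, carrying_capacity=20):
    """One round of retelling: every meme is copied twice with errors
    (retention + variation), then only the catchiest copies survive (selection)."""
    copies = [mutate(m) for m in population for _ in range(2)]
    copies.sort(key=catchiness, reverse=True)
    return copies[:carrying_capacity]

population = ["the god of rain hears us", "plant the seed at first frost"]
for _ in range(50):
    population = generation(population)
print(population[:3])  # whatever variants the arbitrary scoring happened to favor
```

Run long enough, the population fills up with whatever the scoring rule happens to reward, regardless of whether those strings are true or useful to anyone—which is the “selfish replicator” point the passages below return to.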

Memetics is more of a theory about the evolution of religions than about their origins. Why do some ideas catch on while others die out? How and why do religions change over time? Memetics can provide an explanation.

One of my favorite definitions of “culture” is given by David Deutsch in his book The Beginnings of Infinity:

A culture is a set of ideas that cause their holders to behave alike in some ways. By ‘ideas’ I mean any information that can be stored in people’s brains and can affect their behavior. Thus the shared values of a nation, the ability to communicate in a particular language, the shared knowledge of an academic discipline and the appreciation of a given musical style are all, in this sense, ‘sets of ideas’ that define cultures…

The world’s major cultures – including nations, languages, philosophical and artistic movements, social traditions and religions – have been created incrementally over hundreds or even thousands of years. Most of the ideas that define them, including the inexplicit ones, have a long history of being passed from one person to another. That makes these ideas memes – ideas that are replicators. [3]

We see by this definition that it is difficult to distinguish religion from any other form of culture—they all cause their adopters to behave alike in certain ways, and adopt similar ideas. This has caused some scholars to question whether we can even define such a thing as “religion” apart from every other type of social behavior, or whether it’s simply an academic invention:

[Jonathan Zittell] Smith wanted to dislodge the assumption that the phenomenon of religion needs no definition. He showed that things appearing to us as religious say less about the ideas and practices themselves than they do about the framing concepts that we bring to their interpretation. Far from a universal phenomenon with a distinctive essence, the category of ‘religion’ emerges only through second-order acts of classification and comparison…

A vast number of traditions have existed over time that one could conceivably categorise as religions. But in order to decide one way or the other, an observer first has to formulate a definition according to which some traditions can be included and others excluded. As Smith wrote in the introduction to Imagining Religion: ‘while there is a staggering amount of data, of phenomena, of human experiences and expressions that might be characterised in one culture or another, by one criterion or another, as religious – there is no data for religion’. There might be evidence for various expressions of Hinduism, Judaism, Christianity, Islam and so forth. But these become ‘religions’ only through second-order, scholarly reflection. A scholar’s definition could even lead her to categorise some things as religions that are not conventionally thought of as such (Alcoholics Anonymous, for instance), while excluding others that are (certain strains of Buddhism).

Is religion a universal in human culture or an academic invention? (Aeon)

It used to be thought that ideas were passed down through the generations simply because they were beneficial to us as a species. But memetic theory challenges that. One important concept from memetics is that the memes that replicate most faithfully and most often are not necessarily beneficial—they are simply the ones most able to replicate themselves. For this reason, religion has often been called a “virus of the mind” by people attempting to apply the ideas of memetics to religion.

If a gene is in a genome at all, then, when suitable circumstances arise, it will definitely be expressed as an enzyme…and it will cause its characteristic effects. Nor can it be left behind if the rest of the genome is successfully replicated. But merely being present in the mind does not automatically get a meme expressed as behaviour: the meme has to compete for that privilege with other ideas – memes and non-memes, about all sorts of subjects – in the same mind. And merely being expressed as behaviour does not automatically get the meme copied into a recipient along with other memes: it has to compete for the recipients’ attention and acceptance with all sorts of behaviours by other people, and with the recipients’ own ideas. All that is in addition to the analogue of the type of selection that genes face, each meme competing with rival versions of itself across the population, perhaps by containing the knowledge for some useful function.

Memes are subject to all sorts of random and intentional variation in addition to all that selection, and so they evolve. So to this extent the same logic holds as for genes: memes are ‘selfish’. They do not necessarily evolve to benefit their holder, or their society – or, again, even themselves, except in the sense of replicating better than other memes. (Though now most other memes are their rivals, not just variants of themselves.) The successful meme variant is the one that changes the behaviour of its holders in such a way as to make itself best at displacing other memes from the population. This variant may well benefit its holders, or their culture, or the species as a whole. But if it harms them, it will spread anyway. Memes that harm society are a familiar phenomenon. You need only consider the harm done by adherents of political views, or religions, that you especially abhor. Societies have been destroyed because some of the memes that were best at spreading through the population were bad for a society. [4]

In this formulation, religions are seen as actually harmful, simply “using” us to replicate themselves for their own benefit, and to our own detriment, just like a virus. This is the stance taken by, for example, Dawkins and Dennett—both strident atheists. For them, it would be best if we could “disinfect” our minds and free ourselves from these pesky thought viruses.

Dawkins coined the term ‘viruses of the mind’ to apply to such memeplexes as religions and cults – which spread themselves through vast populations of people by using all kinds of clever copying tricks, and can have disastrous consequences for those infected…This theme has been taken up in popular books on memetics, such as Richard Brodie’s Viruses of the Mind and Aaron Lynch’s Thought Contagion, both of which provide many examples of how memes spread through society and both of which emphasize the more dangerous and pernicious kinds of memes. We can now see that the idea of a virus is applicable in all three worlds – of biology, of computer programs and of human minds. The reason is that all three systems involve replicators and we call particularly useless and self-serving replicators ‘viruses.’ [5]

Nevertheless, such “idea viruses” cannot inflict too much damage on their recipients, otherwise they will undermine their own viability:

The overarching selection pressure on memes is towards being faithfully replicated. But, within that, there is also pressure to do as little damage to the holder’s mind as possible, because that mind is what the human uses to be long-lived enough to be able to enact the meme’s behaviors as much as possible. This pushes memes in the direction of causing a finely tuned compulsion in the holder’s mind: ideally, this would be just the inability to refrain from enacting that particular meme (or memeplex). Thus, for example, long-lived religions typically cause fear of specific supernatural entities, but they do not cause general fearfulness or gullibility, because that would both harm the holders in general and make them more susceptible to rival memes. So the evolutionary pressure is for the psychological damage to be confined to a relatively narrow area of the recipients’ thinking, but to be deeply entrenched, so that the recipients find themselves facing a large emotional cost if they subsequently consider deviating from the meme’s prescribed behaviors. [6]

Blackmore herself, however, has retreated from this notion, citing all the apparently beneficial effects from adherence to various religions: more children, longer lifespans, a more positive outlook, and so on:

Are religions viruses of the mind? I would have replied with an unequivocal “yes” until a few days ago when some shocking data suggested I am wrong.

The idea is that religions, like viruses, are costly to those infected with them. They demand large amounts of money and time, impose health risks and make people believe things that are demonstrably false or contradictory. Like viruses, they contain instructions to “copy me”, and they succeed by using threats, promises and nasty meme tricks that not only make people accept them but also want to pass them on.

This was all in my mind when Michael Blume got up to speak on “The reproductive advantage of religion”. With graph after convincing graph he showed that all over the world and in many different ages, religious people have had far more children than nonreligious people…

All this suggests that religious memes are adaptive rather than viral from the point of view of human genes, but could they still be viral from our individual or societal point of view? Apparently not, given data suggesting that religious people are happier and possibly even healthier than secularists. And at the conference, Ryan McKay presented experimental data showing that religious people can be more generous, cheat less and co-operate more in games such as the prisoner’s dilemma, and that priming with religious concepts and belief in a “supernatural watcher” increase the effects.

So it seems I was wrong and the idea of religions as “viruses of the mind” may have had its day. Religions still provide a superb example of memeplexes at work, with different religions using their horrible threats, promises and tricks to out-compete other religions, and popular versions of religions outperforming the more subtle teachings of the mystical traditions. But unless we twist the concept of a “virus” to include something helpful and adaptive to its host as well as something harmful, it simply does not apply. Bacteria can be helpful as well as harmful; they can be symbiotic as well as parasitic, but somehow the phrase “bacterium of the mind” or “symbiont of the mind” doesn’t have quite the same ring.

Why I no longer believe religion is a virus of the mind (The Guardian)

I think memetics is a good way to describe cultural transmission, and I wish that it were used much more freely by sociologists, historians, anthropologists, economists, and other students of human behavior. Memes are a good way to describe how religions are transmitted, and why some religious ideas predominate over others. They provide a good description of how religious ideas evolve over time. But memetics does not provide much information about how and why religions got started in the first place.

Bicameral Mind Theory

Bicameral Mind Theory (BMT) was proposed by psychologist Julian Jaynes in his 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind (coincidentally, the same year as Dawkins and only three years after Becker).

Jaynes argued that what ancient peoples referred to as the “gods” were, in reality, aural hallucinations produced by their own mind. Such hallucinations stemmed from the partitioning of the human brain into two separate hemispheres (bicameral). Spoken language was produced primarily by the left hemisphere, while the right hemisphere was mostly silent. Jaynes noted from research on split-brain patients that if portions of the right hemisphere were electrically stimulated, subjects would tend to hallucinate voices.

This caused him to hypothesize that the thought patterns of ancient man were radically different than our own. In times of stress caused by decision-making, he argued, internal speech was perceived as something “alien” that was guiding and directing one’s actions from somewhere outside oneself.

One of his major pieces of evidence was a thorough study of ancient literature. Jaynes noted that ancient literature lacked a conception of the “self” or anything like a “soul” in living beings. Self-reflective and contemplative behavior simply did not exist. In addition, the gods are described as controlling people’s actions, and people frequently communicate directly with the gods. Most scholars simply took this communication as some sort of elaborate metaphor, but Jaynes was willing to take these descriptions seriously. Such depictions are very common in the Old Testament, for example. And he notes that in the Iliad—the oldest work of Western literature compiled from earlier oral traditions—the characters seem to have no volition whatsoever; they are merely “puppets” of the gods:

The gods are what we now call hallucinations. Usually they are only seen and heard by particular heroes they are speaking to. Sometimes they come in mists or out of the gray sea or a river, or from the sky, suggesting visual auras preceding them. But at other times, they simply occur. Usually they come as themselves, commonly as mere voices, but sometimes as other people closely related to the hero. [7]

The characters of the Iliad do not sit down and think out what to do. They have no conscious minds such as we have, and certainly no introspections. It is impossible for us with our subjectivity to appreciate what it was like…In fact, the gods take the place of consciousness. The beginnings of action are not in conscious plans, reasons, and motives; they are in the actions and speeches of gods. To another, a man seems to be the cause of his own behavior. But not to the man himself… [8]

In distinction to our own subjective conscious minds, we can call the mentality of the Myceneans a bicameral mind. Volition, planning, initiative is organized with no consciousness whatever and then ‘told’ to the individual in his familiar language, sometimes with the visual aura of a familiar friend or authority figure of ‘god’, or sometimes as a voice alone. The individual obeyed these hallucinated voices because he could not ‘see’ what to do by himself…[9]

The preposterous hypothesis we have come to…is that at one time human nature was split in two, an executive part called a god, and a follower part called a man. Neither part was conscious…[10]

The gods would reveal themselves to people in times of stress. We saw earlier that stress—even in modern people—often causes an eerie sense of a “felt presence” nearby:

If we are correct in assuming that schizophrenic hallucinations are similar to the guidances of gods in antiquity, then there should be some common physiological instigation in both instances. This, I suggest, is simply stress.

In normal people, as we have mentioned, the stress threshold for release is extremely high; most of us need to be over our heads in trouble before we would hear voices. But in psychosis-prone persons, the threshold is somewhat lower…This is caused, I think, by the buildup in the blood of breakdown products of stress-produced adrenalin which the individual is, for genetic reasons, unable to pass through the kidneys as fast as a normal person.

During the eras of the bicameral mind, we may suppose that the stress threshold for hallucinations was much, much lower than in either normal people or schizophrenics today. The only stress necessary was that which occurs when a change in behavior is necessary because of some novelty in a situation. Anything that could not be dealt with on the basis of habit, any conflict between work and fatigue, between attack and flight, any choice between whom to obey or what to do, anything that required any decision at all was sufficient to cause an auditory hallucination. [11]

Jaynes’ other line of evidence was physiological, and came from the structure of the human brain itself:

The evidence to support this hypothesis may be brought together as five observations: (1) that both hemispheres are able to understand language, while normally only the left can speak; (2) that there is some vestigial functioning of the right Wernicke’s area in a way similar to the voices of the gods; (3) that the two hemispheres under certain conditions are able to act almost as independent persons, their relationship corresponding to that of the man-god relationship of bicameral times; (4) that contemporary differences between the hemispheres in cognitive functions at least echo such differences of function between man and god as seen in the literature of bicameral man; and (5) that the brain is more capable of being organized by the environment than we have hitherto supposed, and therefore could have undergone such a change as from bicameral to conscious man mostly on the basis of learning and culture. [12]

It’s important to note that when Jaynes uses the term “consciousness”, he is using it in a very specific and deliberate way. He is not talking about the state of simply being awake, or being aware of one’s surroundings. Nor is he talking about reacting to stimulus, or having emotional reactions to events. Obviously, this applies to nearly all animals. Rather, he’s talking about something like “meta-consciousness”, or the ability to self-reflect when making decisions:

The background of Jaynes’ evolutionary account of the transition from bicamerality to the conscious mind is the claim that human consciousness arises from the power of language to make metaphors and analogies. Metaphors of “me” and analogous models of “I” allow consciousness to function through introspection and self-visualization. According to this view, consciousness is a conceptual, metaphor-generated inner world that parallels the actual world and is intimately bound with volition and decision. Homo sapiens, therefore, could not experience consciousness until he developed a language sophisticated enough to produce metaphors and analogical models.

Jaynes recognizes that consciousness itself is only a small part of mental activity and is not necessary for sensation or perception, for concept formation, for learning, thinking, or even reasoning. Thus, if major human actions and skills can function automatically and unconsciously, then it is conceivable that there were, at one time, human beings who did most of the things we do – speak, understand, perceive, solve problems – but who were without consciousness. [13]

Jaynes saw echoes of this bicameral mentality in psychological phenomena such as schizophrenia and hypnosis. Hypnosis, he argued, was a regression to a conscious state prior to that of the modern type which constantly narratizes our lived experience:

If one has a very definite biological notion of consciousness and that its origin is back in the evolution of mammalian nervous systems, I cannot see how the phenomenon of hypnosis can be understood at all, not one speck of it. But if we fully realize that consciousness is a culturally learned event, balanced over the suppressed vestiges of an earlier mentality, then we can see that consciousness, in part, can be culturally unlearned or arrested. Learned features, such as analog ‘I’, can under the proper cultural imperative be taken over by a different initiative… [What] works in conjunction with the other factors of the diminishing consciousness of the induction and trance is that in some way it engages a paradigm of an older mentality than subjective consciousness. [14]

…[W]hy is it that in our daily lives we cannot get above ourselves to authorize ourselves into being what we really wish to be? If under hypnosis we can be changed in identity and action, why not in and by ourselves so that behavior flows from decision with as absolute a connection, so that whatever in us it is that we refer to as will stands master and captain over action with as sovereign a hand as the operator over a subject?

The answer here is partly in the limitations of our learned consciousness in this present millennium. We need some vestige of the bicameral mind, our former method of control, to help us. With consciousness we have given up those simpler more absolute methods of control of behavior which characterized the bicameral mind. We live in a buzzing cloud of whys and wherefores, the purposes and reasonings of our narratizations, the many-routed adventures of our analog ‘I’s. And this constant spinning out of possibilities is precisely what is necessary to save us from behavior of too impulsive a sort. The analog ‘I’ and the metaphor ‘me’ are always resting at the confluence of many collective cognitive imperatives. We know too much to command ourselves very far. [15]

And schizophrenia, he argued, was a vestige of how the bicameral mind routinely worked, but was now only present in those with the genetic disposition for it, perhaps because of some quirk of neurotransmitter functioning or something similar:

Most of us spontaneously slip back into something approaching the actual bicameral mind at some part of our lives. For some of us, it is only a few episodes of thought deprivation or hearing voices. But for others of us, with overactive dopamine systems, or lacking an enzyme to easily break down the biochemical products of continued stress into excretable form, it is a much more harrowing experience – if it can be called an experience at all. We hear voices of impelling importance that criticize us and tell us what to do. At the same time, we seem to lose the boundaries of ourselves. Time crumbles. We behave without knowing it. Our mental space begins to vanish. We panic, and yet the panic is not happening to us. There is no us. It is not that we have nowhere to turn; we have nowhere. And in that nowhere, we are somehow automatons, unknowing what we do, being manipulated by others or by our voices in strange and frightening ways in a place we come to recognize as a hospital with a diagnosis we are told is schizophrenia. In reality, we have relapsed into the bicameral mind. [16]

It is the very central and unique place of these auditory hallucinations in the syndrome of many schizophrenics which it is important to consider. Why are they present? And why is “hearing voices” universal throughout all cultures, unless there is some usually suppressed structure of the brain which is activated in the stress of this illness? And why do these hallucinations of schizophrenics so often have a dramatic authority, particularly religious? I find that the only notion which provides even a working hypothesis about this matter is that of the bicameral mind, that the neurological structure responsible for these hallucinations is neurologically bound to substrates for religious feelings, and this is because the source of religion and of gods themselves is in the bicameral mind. [17]

Interestingly, modern research has revealed that anywhere from 5 to 15 percent of the population hears voices on occasion, and sometimes quite regularly. Most of these people are non-clinical—only about 1 percent of the population is considered to be schizophrenic. These percentages happen to approximate those in tribal societies who are considered able to perform as religious priests or shamans. In many tribal cultures, the ability to hear voices is considered a sign of being able to communicate with gods and spirits and move “between worlds,” and is thus highly desirable rather than stigmatized. Indeed, many scholars of religion have seen clear links between symptoms of schizophrenia and so-called shamanic abilities.

Whither hallucinations?

Whether or not one fully accepts Jaynes’ hypothesis, I would argue that there’s one clear phenomenon he points to that has influenced beliefs in unseen spirits and the survival of ancestors after death: the presence of hallucinations.

It turns out that hallucinating dead relatives is extremely common, even in rationalist Christian Western countries. If that’s the case, how much more common was this phenomenon in ancient times?

Up to six in ten grieving people have “seen” or “heard” their dead loved one, but many never mention it out of fear people will think they’re mentally ill. Among widowed people, 30 to 60 per cent have experienced things like seeing their dead spouse sitting in their old chair or hearing them call out their name, according to scientists.

The University of Milan researchers said there is a “very high prevalence” of these “post-bereavement hallucinatory experiences” (PBHEs) in those with no history of mental disorders. They came to their conclusions after looking at all previous peer-reviewed research carried out on the issue in the English language.

Jacqueline Hayes, an academic at the University of Roehampton, has studied the phenomenon, interviewing people from across the UK who have lost spouses, parents, children, siblings and friends. She told the Daily Mail: “People report visions, voices, tactile sensations, smells, and something that we call a sense of presence that is not necessarily related to any of the five senses.”

She added: “I found that these experiences could at times be healing and transformative, for example hearing your loved one apologise to you for something that happened – and at other times foreground the loss and grief in a painful way.”

Six in ten grieving people ‘see or hear dead loved ones’ (Telegraph)

Now, you might think that those are just hallucinations, and no one could seriously take this as a sign that their dead relatives were still alive. But, it’s important to remember that ancient peoples did not make the distinction between “real” and “not real” the way we do. To them, all phenomena which were experienced—whether in visions, trances, dreams, or “normal” waking consciousness—were treated as equally “real”. The stance we would take in modern times—that our subjective consciousness is not real, while at the same time there is an objective reality which is exclusively real—is not one which would have been operative in past pre-scientific cultures, especially pre-literate ones.

And, indeed, we can see that there are valid reasons for believing this to be so:

Let’s count the many ways that hallucinated voices are real:

– They are real neurological patterns that exist in real human brains.

– They are subjectively real. The listener actually hears them.

– They satisfy the criterion for reality put forward by David Deutsch in his book The Fabric of Reality: they kick back.

– They have metaphorical reality. We can reason about the voices the same way we talk about a movie with our friends (discussing the characters’ motivations, their moral worth, etc.).

– They have real intelligence — because (this is crucial) they’re the products of a bona fide intelligent process. They’re emanating from the same gray matter that we use to perceive the world, make plans, string words together into sentences, etc. The voices talk, say intelligent things, make observations that the hearer might not have noticed, and have personalities (stubborn, encouraging, nasty, etc.).

They are, above all, the kinds of things toward which we can take the intentional stance — treating them like agents with motivations, beliefs, and goals. They are things to be reasoned with, placated, ignored, or subverted, but not things whose existence is to be denied.

Accepting Deviant Minds (Melting Asphalt)

By these criteria, whether or not people really experienced gods as aural hallucinations at one point in time, it is quite likely that they did experience hallucinations which they would have regarded as legitimate and real. Thus, beliefs in disembodied souls would have been a product of actual, lived experience for the majority of people, rather than just an “irrational” belief.

[1] Ernest Becker, The Denial of Death, pp. 203-204

[2] Susan Blackmore, The Meme Machine, pp. 14-15

[3] David Deutsch, The Beginnings of Infinity, p. 369

[4] David Deutsch, The Beginnings of Infinity, pp. 378-379

[5] Susan Blackmore, The Meme Machine, p. 22

[6] David Deutsch, The Beginnings of Infinity, p. 384

[7] Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, p. 74

[8] ibid., p. 72

[9] ibid., p. 75

[10] ibid., p. 84

[11] ibid., p. 84, p. 93

[12] ibid., p. 84, p. 106

[13] A. E. Cavanna et al., “The ‘bicameral mind’ 30 years on: a critical reappraisal of Julian Jaynes’ hypothesis,” Functional Neurology, January 2007

[14] Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, p. 84, p. 398

[15] ibid., p. 402

[16] ibid., p. 404

[17] ibid., p. 413