The Origin of the Factory 1

There has been an ongoing debate over what the industrial revolution was, and whether it required the introduction of fossil fuels.

At first glance, it may seem obvious that fossil fuels were required. How else could the vast output of the industrial revolution have been achieved?

Not so, say those who argue the opposite. They argue that the industrial revolution was well under way long before fossil fuels entered the picture.

Rather, the industrial revolution was really all about the reorganization, intensification, and mechanization of human labor, they say. During the industrial revolution, labor was transformed into something closer to what we know today—large amounts of people toiling under the same roof, each doing their own specialized part of the production process instead of individually crafting things from scratch. Deskilling, specialization, routinization, mechanization, and harsh worker “discipline”—these were all aspects of the Industrial Revolution’s reorganization of working life that did not require the introduction of fossil fuels.

Similarly, the use of machines first kicked off with human and animal power, and later moved to water and wind. It was only after centuries of this that steam engines replaced them and fossil fuels became the prime mover. By the time fossil-fuel-powered engines came along, the argument goes, they merely had to be plugged into a system that had already been put in place.

In large measure, they have the timeline right. The reorganization, routinization, deskilling and mechanization of labor did indeed precede the steam engine. Early factories were established before the large-scale harnessing of fossil fuels, and output did go up a great deal.

In this case, it’s helpful to distinguish, as some historians do, several industrial revolutions. How it’s divided up varies depending on the historian. But generally, a distinction is made between the birth of the factory system; the mass utilization of coal and steam engines; the use of gasoline and the internal combustion engine; and the harnessing of electricity and the application of scientific research and engineering methods to the creation of new products.

But the question remains, where did this reorganization of labor first originate?

The New World Path to Industrialization

It turns out that it originated in the New World. From there, it spread to Europe.

That’s the contention in a book called Indian Givers by Jack Weatherford. Weatherford is best known for his series of books about Genghis Khan and the Mongols. But before he wrote those, his specialty was Native American history, and he wrote several books about the topic.

Chapter Two of Indian Givers is called “The American Indian Path Towards Industrialization.” At first glance, it might seem odd to talk about an “American Indian path to industrialization,” as the American Indians were less, not more, technologically advanced than the European invaders (with some notable exceptions).

Rather, it was a combination of circumstances, caused by the exploitation of New World resources as well as the discovery of new ones, that played the critical role. As he puts it:

“Without European technology and organization, the industrial revolution would never have started in America; without American precious metals and methods of processing, the industrial revolution would never have happened in Europe.” p. 58

Weatherford uses as his illustration the village of Kahl in Germany. For a long time a small farming village, it industrialized during the great rolling industrialization of the nineteenth century. Today, alongside quaint village houses, fenced farm fields, and abandoned mills, sit factories, ports, railroads, truck depots, and even a nuclear power plant! What changed, and why did it change so rapidly in just the last couple of centuries?

Farm life in Kahl remained much the same regardless of whether the village was inhabited by Celts, the Chatten, the Romans, or the Franks. A peasant would probably have felt equally at home farming in the Kahl of 700 B.C. or A.D. 1700.

In this time the basic subsistence pattern of agriculture—the crops grown, the animals used, and the tools for growing and processing them—remained basically unaltered. The houses of the peasants from the two eras differed little, the peasants moved around by the same modes of transportation, and they ate roughly the same meals.

Suddenly in the last few centuries, life changed radically after millennia of great technological stability. The peasants stopped working in the fields and started to work in factories. They illuminated their houses and other buildings with electricity, and they replaced their horses with bicycles, tractors and trucks. They altered their diet, and the way they built their homes and educated their children. Within a few generations, virtually every aspect of life changed… IG pp. 40-41

Weatherford asks a question you hear often today: why did technologically advanced and sophisticated ancient cultures not break through to have their own industrial revolution? Why did it take (from our standpoint) so long? If we don’t require fossil fuel power or steam engines to have an industrial revolution, as we saw above, then what was stopping them?

After thousands of years of agricultural life, this sudden leap into the industrialized world seems difficult to explain. Why had the Greeks, with so much mathematical and philosophical learning and such outstanding architectural techniques, not been able to make and use machines? Why were the Romans, with all of their technical and practical knowledge of engineering and their vast array of engines of war, not industrialized? Why could the people of the Renaissance, who demonstrated their mechanical wizardry by making elaborate toys, not make the leap into machine production? What happened to the world in the 1700s and 1800s to make it industrialize after thousands of static years of technological stability? Indian Givers, p. 41

Weatherford’s answer is that the encounter with the New World was the catalyst. This was the missing piece from all earlier eras of world history.

Why was this the case? Quite simply, the New World had a chronic shortage of labor. The Old World did not.

In fact, it’s highly unlikely that such a shortage would ever have arisen in the old civilizational centers of Eurasia. Nor would the new ways of organizing labor have developed sui generis among the peoples of the New World.

No, it took an invasion to do that. An invasion combined with a labor shortage.

In fact, the Old World had a surfeit of labor throughout most of history. That surfeit of labor would end up becoming the labor stock for the first industrial revolution.

But first, the factory system had to be invented.

Labor shortage

Unlike the Old World, the New World suffered from a chronic and persistent shortage of labor.

There was a vast pool of resources just sitting around that needed processing, but the people who lived there kept inconveniently dying just when the exploiters needed them the most.

So the conquerors were forced by necessity to come up with new ways of organizing labor. Because of the chronic shortage, Europeans in the New World developed all sorts of novel techniques to extract the maximum amount of labor from their limited workforce—by and large enslaved Indians and Africans, with indentured workers (bonded labor) thrown into the mix.

In this case, necessity was the mother of invention. History shows that things typically aren’t invented until there’s some pressing need for them.

You might say the New World products were the tinder, and the labor shortage was the spark.

The Americas in the sixteenth and seventeenth centuries promised vast resources—gold, silver, and furs, as well as the seemingly inexhaustible potential for crops of tobacco, sugarcane, rice, coffee, indigo, and hundreds of other plants. But a major obstacle constantly slowed the extraction of both the metals and agricultural treasures from the ground: the persistent shortage of labor.

The Spanish quickly impressed the Indians into slave labor, but in some areas, such as the Caribbean islands and Central America, the Indians died very quickly from diseases, malnutrition, overwork, or simple culture shock and grief. In other cases the natives lacked sufficient experience in agriculture or mining to be acculturated into the new Spanish system as workers.

No matter how many slaves the British and Dutch shipped to America, the plantation and mine owners demanded more laborers. Because of the lack of sufficient manpower, the Americans improvised whole new mechanical technologies to help tap the natural resources and potential wealth. IG, p. 49

Back in the days of cottage industries, artisanal production, and small, individual workshops, it was unlikely that anyone would have seen a need to scale up their operations very much, or dramatically increase their output, as there would not have been enough customers to buy their products even if they were to do so.

With the introduction of vast new amounts of silver into the Old World, and the marketization of society it engendered, suddenly not only was there a huge amount of new products, but there was also the money with which to buy them! This catalyzed a self-sustaining feedback loop—more money demanded more products, which made more money, which was invested into higher output of products. The New World produced not only the products, but also the money with which to buy the products. Supply really did create its own demand:

At the time of the discovery of the Americas, Europe had only about $200 million worth of gold and silver, approximately $2 per person. By 1600 the supply of precious metals had increased approximately eightfold. The Mexican mint alone coined $2 billion worth of silver pieces of eight.

The silver coins flowing through Europe at first promised to strengthen the feudal order, but in the end they forged whole new classes and changed the fortunes of many countries. The new coins helped to wash away the old aristocratic order in which money games could be played only by the privileged few; massively larger amounts of money opened up new games to new people.

Even though all the gold and silver went into Spain, it did not stay there. From Spain the money spread throughout Europe. The Hapsburg monarch Charles V occupied his throne both as emperor of the Holy Roman Empire and as the king of Spain; this facilitated the spread of the money from Spain to the Hapsburg holdings in the Spanish Netherlands and across Germany, Switzerland, Austria, and the Italian states. Three-fifths of the bullion entering Spain from America immediately left Spain to pay debts, mostly those incurred by the profligate monarchy…

Precious metals from the Americas superseded land as the basis for wealth, power and prestige. For the first time there was enough of some commodity other than land to provide a greater and more consistent standard by which wealth might be measured. This easily transported and easily used means of wealth prepared the way for the new merchant and capitalist class that would soon dominate the whole world…

The tremendous volume of new currency influenced the economy of all Europe. For example, in Naples there were only 700,000 ducats in circulation and in savings in 1570. In less than two centuries, by 1751, there were eighteen million ducats. These eighteen million ducats, moreover, could be used many times in a year for various types of transactions. The total number of ducats used in buying and selling would be approximately 288 million.

Similarly, in France, which received its wealth from the New World much later than Spain, approximately 120 million francs circulated in 1670, but by 1770 there were two billion in circulation, a fifteenfold increase in a century. IG, pp. 14-16
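The arithmetic in that Naples passage is a velocity-of-money calculation: total transaction volume equals the money stock times the number of times each coin turns over in a year. A minimal sketch (the velocity of 16 is implied by the quoted figures, not stated in the book):

```python
# Velocity-of-money arithmetic implied by the Naples figures quoted above.
# Total transaction volume = money stock * velocity (turnovers per year).

def transaction_volume(money_stock: float, velocity: float) -> float:
    """Total value of transactions supported by a given money stock."""
    return money_stock * velocity

ducats_1751 = 18_000_000                        # ducats in circulation, Naples, 1751
implied_velocity = 288_000_000 / ducats_1751    # back out velocity from quoted total

print(implied_velocity)                                    # 16 turnovers per year
print(transaction_volume(ducats_1751, implied_velocity))   # ~288 million ducats
```

The point of the feedback loop described above is that both terms grew at once: New World bullion expanded the money stock, while marketization raised how often each coin changed hands.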

Not only did they ship over products from the New World like cotton, tobacco and coffee, but they also shipped over the means to buy those same products in the form of silver and gold bullion.

There was tremendous bounty in the New World, it was true, yet New World goods needed a lot of work to process, that is, they were labor intensive. Where would the labor to do all this work come from?

Slavery, obviously. But even slaves can only do so much. Plus, they’re expensive. You’ve got to ship them over from Africa, buy them, feed and clothe them, and so forth. The same is true for bonded labor. Plus, unlike artisans, you’re not dealing with skilled labor here. You’re dealing with simply warm bodies, ripped from their culture and with little to no training in what they are about to do.

So, to minimize the number of slaves you needed to buy (and the bonded laborers you needed to indenture), and to dumb down the process enough that they could easily be put to work, you needed to maximize labor output and simplify your tasks. And thus the idea of capitalist efficiency was born—maximizing productive output with the absolute minimum of labor input.

In order to do that, you need to reorganize labor and use machines to the greatest extent possible. This is why the process began in the Americas, rather than in the Old World. In particular, it began in the sugar mills of the Americas—large, extensive operations designed to grow and harvest sugarcane and turn it into raw material to be shipped back to Europe and sold for profit.

Even though most of the labor was unskilled, you still needed a small class of skilled laborers for supervision, as well as to direct the more precise industrial processes. So you end up with a balance of about 90 percent unskilled labor versus 10 percent skilled labor in the sugar mills, very similar to that in the earliest factories.

Another key ingredient was the impetus to mass production that New World crops gave to European markets. All sorts of new products were discovered during the Columbian exchange: tobacco, chocolate, cotton, dyes, rubber, sisal, etc. These were combined with products already known to Europeans but which Europeans were unable to produce themselves because they required a tropical clime—coffee, sugar, and cotton especially.

While cotton was previously known to Europeans, it turns out that cotton was domesticated independently by natives in the New World, and their cotton was of much higher quality than that in the Old World. The strands were longer, smoother, and silkier; so silky, in fact, that Europeans mistook some Indian cotton garments for silk! Cotton cloaks were used as a high-denomination currency in Mesoamerica, alongside lower-denomination cacao beans.

Some Old World types of cotton had been grown in India and the Near East for centuries, but only very small quantities of it ever reached Europe. This cotton was not only expensive, but weak and difficult to weave because of its short strands.

Asiatic cottons, Gossypium herbaceum and G. arboreum, had a strand length of only about half an inch, but American upland cotton, G. hirsutum, usually grew a full inch or more. Meanwhile, G. barbadense, the tropical American cotton that became best known as Sea Island cotton (from the plantations that grew it on the coast of South Carolina and Georgia), could grow to two and a half inches.

In Europe the short strands of the Old World cotton served primarily for padding jerkins under the coats of mail worn in battle. In time the uses of cotton expanded to the making of fustian, which was a coarse material built on a warp of stronger flax and a woof of Old World cotton. Not until American cotton arrived in England, however, did the phrase “cotton cloth” appear in English; the Oxford English Dictionary’s earliest date for it is 1552.

The long-strand cotton of the American Indians so surpassed in quality the puny cotton of the Old World that the Spaniards mistook the American cloth for silk and interpreted its abundance as yet further proof that these new lands lay close to China.

For thousands of years before the European conquest of America the Indians had been using this carefully developed cotton to weave some of the finest textiles in the world. Many remnants of these early cloths survive to the present day, their colors and designs intact, after several thousand years in the desert burials of Peru, Bolivia, and Chile. IG, pp. 42-42

I’m not going to go deep into the history of New World products or foods. Instead, I’d like to take a look at how the labor shortage forced the Europeans to automate many of the processes they used with enslaved New World labor, and how this led to the development of the factory system in Europe. In particular, we’ll look at two critical New World industries: mining and sugar production.

Job Quantity versus Quality

A few posts ago, I put forward the idea that a decline in the workforce population may be what’s behind today’s low headline unemployment rate.

Supporters of Trump tout the low unemployment rate as if his policies had something to do with it, rather than demographics. Or they may tout decreased immigration, which actually started before he was elected, due to the shrinking disparity in economic conditions between the U.S. and Mexico (Mexico getting better, U.S. getting worse).

It’s an enduring paradox: on the one hand, politicians consistently claim that they cannot “interfere” in the “natural” workings of the free market economy when times are hard—they can only sit by and watch helplessly as it does its thing and wait it out. But when things are going well while they are in office, they take all the credit and insist it was 100 percent their policies that created the current conditions, and not the fact that the market is cyclical.

In other words, it’s mainly down to luck.

But a decline in the unemployment rate does not tell the whole story.

There are two other important factors to consider here. One is the workforce participation rate. The unemployment rate might be lower because fewer and fewer people are counted as being actively in the labor market. That is, they become “unpersons.”

The other is the quality of the jobs created. We could create hundreds of thousands of new jobs as human footstools that pay sub-starvation wages and call it a great success because the unemployment rate went down.

Until now, all I’ve ever heard is the headline number of jobs created and the unemployment rate. But what I never heard in any of those discussions in the media was what kinds of jobs were created. Full or part time? Well-paid or not? Benefits? Are we just creating more bartender jobs while middle class jobs continue to disappear?

To the extent commentators did dive into the stats, in fact, the scenario above was in evidence: more and more jobs in low-wage service and retail, fewer and fewer in professional occupations across the board. What use is there in creating more and more dead-end McJobs that people can’t afford to live on?

This explains why even though the headline unemployment rate is low, there is tepid wage growth (which we covered last time). It also explains why so many people are struggling and unhappy with their circumstances even while hearing rosy statistics constantly blasted at them from the corporate media. It simply doesn’t comport with their lived reality. Under the current system, if you get even $1.00 a week from a “job” you’re considered “employed” and removed from the unemployment statistics. Hooray!

In fact, all the evidence indicates that the number of “good” jobs continues to shrink, which means that the competition for them is as fierce as ever, despite the overall growth in job numbers. “Opportunity hoarding” is still endemic among the elite upper classes in the U.S. And all of this may be due to structural factors in the U.S. economy, which are only going to get much worse in the years ahead.

1. Workforce participation rate.

You probably already know the trend here. Since the 1960s, ever fewer men have been in the workforce, while the number of women steadily increased from the late 1960s onward.

The number of women entering the workforce offset the men leaving it, such that the overall workforce participation rate across the entire population grew over time. A sort of Great Displacement, as it were.

U.S. Labor Force Participation Rates

Then it started reversing after about 2000 or so. We hit peak employment, and rates across the board started to fall for both genders after 2008. There was also a trend of older workers continuing to work, likely because of the switch from guaranteed pensions to the 401K scam where you provide for your own retirement out of your shrinking discretionary income.


The latest data has the workforce participation rate hovering around the level where it bottomed out around 2012, with no signs of recovery. In fact, the BLS projects it will fall even further in the years ahead.


The bottom line is that proportionally fewer people are working than in recent decades, which skews the statistics, although not by a tremendous amount. The workforce participation rate is expected to shrink further: more people will be permanently excluded from the game of musical chairs going forward, especially men.
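The mechanics of that skew can be sketched directly: the headline unemployment rate counts only people actively in the labor force, so workers who drop out lower the rate without a single new hire. The figures below are illustrative, not real BLS data.

```python
# How a shrinking labor force can lower the headline unemployment rate
# without any new hiring. Figures are illustrative, not real BLS data.

def unemployment_rate(employed: int, unemployed: int) -> float:
    """Unemployed as a share of the labor force (employed + unemployed)."""
    labor_force = employed + unemployed
    return unemployed / labor_force

employed, unemployed = 150_000, 10_000
print(f"{unemployment_rate(employed, unemployed):.1%}")          # 6.2%

# 4,000 discouraged workers stop looking and leave the labor force;
# they are no longer counted as unemployed, though no one was hired.
print(f"{unemployment_rate(employed, unemployed - 4_000):.1%}")  # 3.8%
```

This is exactly why the participation rate has to be read alongside the headline number: the second figure looks like a recovery, but employment is unchanged.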

2. Job Quality

At long last, there is finally a discussion not just on the overall number of jobs, but on what type of jobs are being created.

A “Job Quality Index” has been created by Gallup, et al., to go along with the unemployment rate. And what it has found is that the vast, vast majority of jobs in the United States are fucking horrible.

That comports with what I have seen. While I have observed an enormous number of “help wanted” signs of late, all of them are at establishments like fast-food franchises, restaurants, hotels, dry cleaners, mechanics’ yards, bus companies, and so forth. These were disproportionately the jobs taken by illegal immigrants in the past several decades. Jobs that actually allow you to get ahead and have decent benefits are just as hard to procure as ever. This distorts the unemployment rate.

At the same time, the costs for the very basics of life—so called “nondiscretionary spending”—continue to skyrocket. Costs for housing, especially where jobs are plentiful, are escalating beyond people’s ability to pay for them, leading to an increase in the number of homeless people with jobs. And education is more and more out of reach for the average person, having risen 19 times faster than average family incomes since 1980. Access to many professional occupations is now only available to the fortunate few who chose their parents correctly. So much for social mobility and the “American Dream.”

What this has led to is widespread and dire poverty, even in the face of low unemployment numbers. This is why so many people react with incredulity, and even anger, when the rosy statistics are thrown in their face to argue that things have never been better, and that if you’re struggling it’s your own fault.

And a paper by the Brookings Institution recently found that nearly half of Americans earn poverty-level wages:

According to a Brookings Institution analysis, unemployment may be down, but there aren’t enough good jobs to go around.

They say 44% of American workers are employed in low-wage jobs that pay median annual wages of $18,000.

The report says their median hourly wages are $10.22. That’s higher than the federal minimum wage which sits at $7.25. The minimum wage in Louisiana is also $7.25.

That’s nearly half of the American workforce who don’t make what’s considered a living wage. According to MIT, a living wage for a single person in Louisiana is $11.28. The poverty wage for a single adult with two children is $9.99.

This isn’t just a problem for workers who are young or inexperienced, according to the report. The low-wage workforce is primarily made up of post-college age adults and older Americans.

56% of them are ages 25-50. 19% of them are ages 51-65.

23% of low-wage workers have an associate’s degree or more. Add in the number of workers who are in school or have some college education and that number jumps to 48%.

Job Quality Index data appears to back up the analysis. It assesses job quality in the United States and measures the “direction and degree of change in high-to-low job composition.”

While the JQI chart shows increases and declines in job quality since its inception in 1990, the trend has generally been a downward one.

According to the index, job quality has declined by 14.3% since 1990. The index most recently began to trend upward in 2012 but started to drop again in 2017. You can see it here.

Most workers appear to feel it. A CBS report in October said 6 in 10 workers rate their job quality as “mediocre to bad.”

Report: Nearly half of American workers have low-wage jobs (KNOE)
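A quick back-of-the-envelope check on the two medians quoted above is revealing. (Medians of annual and hourly wages come from different distributions, so this is only a rough consistency check, not an exact derivation.)

```python
# Back-of-the-envelope check on the Brookings medians quoted above.
# Note: dividing one median by another is only a rough consistency
# check, since they come from different distributions.

median_annual = 18_000      # median annual wages, low-wage workers (USD)
median_hourly = 10.22       # median hourly wage (USD)

implied_hours_per_year = median_annual / median_hourly
implied_hours_per_week = implied_hours_per_year / 52

print(round(implied_hours_per_year))      # ~1761 hours per year
print(round(implied_hours_per_week, 1))   # ~33.9 hours/week: under full-time

# Compare against the MIT living wage quoted for Louisiana,
# assuming a full-time 40-hour, 52-week year:
living_wage = 11.28
full_time_annual = living_wage * 40 * 52
print(round(full_time_annual))            # ~23,462: well above the $18,000 median
```

In other words, the figures imply that the typical low-wage worker is not even getting full-time hours, and that even full-time work at the quoted median wage would fall short of a living-wage income.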

A study conducted by the Brookings Institute found that 53 million Americans between the ages of 18 and 64 (or 44 percent of the workforce) yearly earn a median average of $18,000 (or $10.22 per hour). What this means is that a large section of our society can’t afford even small mistakes, let alone major emergencies. It only takes one bad move or shock for a low-wage worker to be irrevocably thrown into a catastrophe. CBS’s post about the Brookings report appeared the day after it aired the 60 Minutes episode on Seattle’s homeless crisis.

What 60 Minutes Missed: 44 Percent of U.S. Workers Earn $18,000 Per Year (The Stranger)

…the working class can’t thrive on low unemployment rates alone. For the median job-seeker in Trump’s America, the odds may be good, but the good jobs are an oddity. Amid all the encouraging signs in Friday’s jobs report, wage growth remained bizarrely tepid. In a labor market as tight as this one, conventional economic models would predict a bidding war between understaffed employers, and thus, accelerating wage growth. And yet, even as the unemployment rate has fallen in 2019, the pace of wage gains has actually slowed.

A lot of factors have contributed to this textbook-defying state of affairs. But an excellent new Washington Post feature on the disappearance of administrative assistant jobs illustrates some of the most essential ones.

Contrary to Andrew Yang’s grim prophecies, automation is not rendering vast swathes of the public unemployable. What technology and trade have done, however, is displace millions of Americans from their middle-class jobs, and send them hurtling down the income ladder into less remunerative occupations. And while some dimensions of this development have inspired widespread political attention (if not meaningful political action), others have gone all but unmentioned.

The plight of the downwardly mobile manufacturing worker is familiar to most Americans. But that of the displaced administrative assistant is less so. And yet, they are two sides of the same story: Since 2000, the U.S. economy has shed 2.9 million jobs in (disproportionately male) production occupations, and 2.1 million in (disproportionately female) administrative and office-support roles.

As the Post notes, such clerical jobs have been for non-college-educated women what manufacturing employment once was for non-college-educated men — a route to lifelong economic security. But as administrative assistant roles have been off-shored, automated, or — at the C-suite level — refashioned into elite positions staffed by lawyers and MBAs, that path to prosperity has been foreclosed to millions of working-class women. For middle-aged workers who had built careers in the field, this has meant sudden and harrowing downward mobility.

According to the Urban Institute, more than half of all workers over 50 in the U.S. eventually lose their jobs involuntarily, and 90 percent of those workers get consigned to lower-paying work for the rest of their careers. Meanwhile, for the typical millennial non-college-educated worker, our ascendant “barbell economy” (which concentrates job growth at the top and bottom of the income ladder) doesn’t even provide ephemeral opportunities for middle-class employment.

Critically, this is not because our society has no use for blue and pink-collar labor: Of the ten occupations expected to add the most jobs to the U.S. economy over the next decade, six are “low-skill” roles that pay less than $27,000 a year. This trend would be alarming enough in a world where America didn’t have an extortion racket for a health-care system, runaway inflation in its higher education sector, and housing markets beset by artificial scarcity. In the world we actually live in, the collapse of middle-income job opportunities has coincided with a meteoric rise in the costs of middle-class life…

Gallup’s headline finding is that, as measured by its index, only 40 percent of Americans currently have “good” jobs. But a more telling (and less ambiguous) finding from its survey may be this: While 59 percent of U.S. workers say their wages have increased over the past five years, no more than 37 percent say any other key marker of job quality has improved over that period. In fact, roughly as many workers say their job’s non-cash benefits have gotten worse in recent years (21 percent) as say they’ve gotten better (23 percent). Meanwhile, majorities report no gains in their job’s sense of purpose, enjoyability, or the stability and predictability of its wages — all factors that respondents rated as being more important to job quality than overall pay…

Jobs, Jobs Everywhere, But Most of Them Kind of Suck (NYmag)

…Right now the JQI is just shy of 81, which implies that there are 81 high-quality jobs for every 100 low-quality ones. While that’s a slight improvement from early 2012—the JQI’s 30-year nadir—it’s still way down from 2006, the eve of the housing market crash, when the economy regularly supported about 90 good jobs per 100 lousy ones.

Or, in plainer English, the US labor market is nowhere near fully recovered from the Great Recession. In fact, the long-term trend in the balance of jobs paints a more ominous picture.

“The problem is that quality of the stock of jobs on offer has been deteriorating for the last 30 years,” says Dan Alpert, an investment banker and Cornell Law School professor who helped create the index. (Along with Alpert, the index is built and maintained by researchers at Cornell University Law School, the Coalition for a Prosperous America, the University of Missouri-Kansas City, and the Global Institute for Sustainable Prosperity.) The “whole story” told by the index, he adds, is “the devaluation of American labor.”

The prevalence of low-quality jobs suggests that there’s still a lot of slack in the labor market—meaning, people could be working more or using their skills more fully than they currently are. This is pretty much the opposite conclusion you’d draw from the ultra-low unemployment rate, robust job creation numbers, and other conventional headline data.

The great American labor paradox: Plentiful jobs, most of them bad (Quartz)
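The JQI's ratio form (good jobs per 100 bad ones) converts readily into a share of all jobs, which makes the decline easier to grasp. A small sketch based on the two index values quoted in the Quartz excerpt:

```python
# Interpreting the Job Quality Index as described in the Quartz excerpt:
# a JQI of 81 means 81 high-quality jobs for every 100 low-quality ones.

def good_job_share(jqi: float) -> float:
    """Share of all jobs that are high-quality, given JQI good-per-100-bad."""
    return jqi / (jqi + 100)

print(f"{good_job_share(81):.1%}")   # ~44.8% of jobs high-quality today
print(f"{good_job_share(90):.1%}")   # ~47.4% on the eve of the 2006 housing crash
```

The roughly three-point drop in the good-job share between 2006 and today is the "devaluation of American labor" that Alpert describes.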


The conclusion to be drawn is that the unemployment rate has very little to do—if it ever did—with what the labor market is actually like for Americans. Making it better will take a much more activist approach, rather than simply letting the market decide, or actively suppressing labor movements (as has historically been the case). So don’t let the statistics blind you to what’s really going on, because I suspect that’s going to be a major effort by politicians and the media over the next year. Don’t be fooled!

The Nature of Unemployment

While doing some research on History Stack Exchange, I came across this answer concerning unemployment, and I thought it was relevant to share here:

There is no such thing as a “labor shortage”. “Labor Shortage” is just a propaganda term used by employers who are trying to find some excuse to pay less.

For example, many manufacturers complain that there is a “shortage” of machinists. What this means is that they would like to pay machinists $10 an hour and, surprise, surprise, no machinist wants to work for $10 an hour. If the manufacturer offered $100 an hour, he would have machinists coming out of his ears. He would have machinists lining up at his front door wanting to work for him. He would have machinists flying from all over the world to work at his factory.
Likewise, employees use the same political language. They say there is a “job shortage”. Of course, there is no job shortage. If you are willing to work for $5 an hour you will find hundreds of employers willing to hire you. In fact, for $5 an hour * I * will hire you.

There is no such thing as “labor shortages” and “job shortages”. They are just made-up terms used by people for political purposes.

That’s an interesting take on it. Of course, there might be a true labor shortage if there literally weren’t enough people to keep society running, in which case we would have zero unemployment. This would be an “all hands on deck” situation where everyone’s labor is required simply to survive.

Another phrase you’ll often hear is that we need to import people to do the jobs “Americans won’t do.” Well, they would do them if you paid them well enough. The fact is, this is just an excuse by employers to keep wages low.

Another exchange clarifies the point:

I think he meant labour shortage in the sense that there were actually more available jobs than workers. – Saal Hardali May 27 ’14 at 17:49

You don’t seem to get it. There are an infinite “number of jobs”. If you and your friends agree to work for me for 25 cents an hour I will hire you all, even if you have a thousand friends. I just “created” 1000 jobs instantly. You seem to have the erroneous idea that “jobs” are some kind of fixed commodity. A “job” is just some guy willing to pay some other guy to do something. – Tyler Durden May 27 ’14 at 18:08

Another question asks whether there was unemployment in ancient Rome. The commenter answers that what we think of as “unemployment” was not relevant to earlier, pre-capitalist societies:

The modern definition of unemployed is “having looked for work recently”. I’m not entirely sure that definition is appropriate for Rome. Modern Western Liberal Democracy is organized around the notion that “companies” provide employment, and that people seek employment. Unemployment results in a dramatic decline in economic and social status.

Although there were workshops in Rome, and there were teams that organized to perform tasks that no individual could, I’m not aware of anything that resembles the modern limited liability corporation. Roman politics and economics were based more on relationships than on companies. Romans belonged to a family, and to a tribe, and usually to some kind of patron/client relationship. Depending on their social class, they may have also belonged to one or more social organizations (e.g. burial society). If someone wanted to work, they would rely on these connections to find them employment. “Unemployment” didn’t really result in the kind of economic and social decline we find today because these social bonds provided a safety net. If for some reason you were isolated from your social network, that might be a definition similar to “unemployed”, but there were mechanisms (adoption, social organizations, etc.) that made the social networks fairly resilient.

There is no unemployment in foraging societies either, for example. Unemployment presumes a market society where we must earn wages or some other income to survive. The wealthy rely on passive income from investments; the rest of us have to sell our labor. And despite the insistence of the “financial independence” crowd (Mr. Money Mustache, et al.), it is not possible for all members of society to live off of passive income. It also assumes that we have enough surplus in the first place to invest in passive-income-generating schemes, which most of us do not, by design. We can’t all be six-figure software engineers (in which case, software engineering would not be worth very much).

What brought this to mind was this discussion with Warren Mosler over MMT’s counter-intuitive idea that unemployment is actually created by governments, and the unemployment rate is always a policy choice. In other words, it is not like a hurricane or a fire that is out of our control. It is a human-created phenomenon. Here is Mosler explaining some important aspects of the job market:

[15:41] “The labor market is not a fair game, with or without unemployment, because people have to work to eat. Business only hires if they can make a certain return on equity that they think is a good deal, otherwise they don’t hire. Nothing bad happens to them if they don’t hire. And so mainstream, elementary game theory will tell you this is not a fair game, and so you’re going to expect real wages to gravitate towards subsistence levels, whatever that is, unless there’s some support for labor.”

“Now, we used to have a lot of support for labor with labor unions, and then in the 1980s they were all broken, and that’s when the wedge started being driven between productivity and labor, where the productivity kept going up, but everything started going to profits instead of wages. It’s pretty clear in the data that that’s what happened, because labor lost its support…The principle is that real wages are going to stagnate near subsistence unless you do something to support them, because it’s not a fair game.

“You could have unemployment down to everybody’s got a job except this one last guy who’s unemployed, and then he sees a job offer. Well, he’s got two choices—take the job or his family can’t eat. It’s not like all these other people have jobs so I can demand more money. You look at his position—he’s not in any better position because everybody else has a job. So it’s absolutely not a fair game.

“So it shouldn’t be any surprise that wages aren’t going anywhere, even with unemployment coming down. The Phillips curve doesn’t seem to be working the way some have suggested…which says that as you get towards lower and lower unemployment, wages will go up and up. It’s just not happening, and that’s why.”

This seems to be lost on nearly all economists, who are stumped by the fact that wages aren’t rising, even in the face of “full” employment. But why would they?

The economy may be booming, but nearly half of Americans can’t make ends meet (L.A. Times)

The fall in those wages has alarmed some economists, who say paychecks should be getting fatter at a time when unemployment is low and businesses are hiring. “This is odd and remarkable,” said Steven Kyle, an economist at Cornell University. “You would not normally see this kind of thing unless there were some kind of external shock, like a bad hurricane season, but we haven’t had that.”

The falling wages promise to exacerbate historic levels of U.S. inequality. Within the labor force, it means workers who were already making less are falling further behind. And if private laborers as a whole are seeing their earnings flatten while the economy as a whole grows at an annual rate of more than 2 percent, that means the gains are going almost exclusively to people already at the top of the economic ladder, economists say.

For the biggest group of American workers, wages aren’t just flat. They’re falling. (Washington Post)

Then Mosler explains the basics about why unemployment is an artificial creation of the state, rather than a “natural” phenomenon. In doing so, he explains the origin of money, and what the public debt really is—the money supply:

[26:20] “The other critically important contribution of Modern Monetary Theory is that the cause of unemployment, by design, is taxation…The monetary circuit theory begins with businesses borrowing money to hire people. MMT says, no, the money story begins before then. Why is anybody working for this money to begin with? Where does the money story start? It starts with a government trying to provision itself.

“We show examples like when the British colonized Ghana. They wanted to grow coffee there. How do you do it? What you have is a state—the British at the time—that wanted to move resources, which was human labor, from whatever these people were doing in Ghana before they got there in a non-monetary society. They wanted to get them into the coffee fields…”

“What the British did was they implemented a tax. They started a new currency—let’s call it the crown—and so they offered crowns to people who wanted to come work in the coffee fields. But nobody wanted to work for crowns; nobody had ever heard of it; there’s no reason to give up hunting and fishing and taking care of the children to go down to the coffee fields and pick coffee for this crown thing…So what they did was, they implemented a hut tax. They all lived in these grass huts. They said there’s going to be a ten-crown-a-month tax on every hut, and if the hut doesn’t pay its tax, we’re going to go burn it down. They had the military.”

“So now they’ve created this tax liability – everybody has to pay 10 crowns a month or get your house burned down. And so now they’ve created something which didn’t exist before: that’s people looking for paid work in a currency, which is what we call unemployment. Unemployment is not people looking for volunteer work for the American Cancer Society who can’t get a job — it’s people who need money. And it’s created by taxation. In a non-monetary society where you don’t have monetary taxation, there’s no unemployment as we know it. There can’t be. It’s just not applicable.”

“And so the taxation, by design, is there to create people who need the currency for the further purpose of provisioning the government — in this case, provisioning the British with labor in their coffee fields. They would then have all these people showing up looking for work in the coffee fields, and now they could hire them and pay in crowns because the people need it to pay the tax. They understood that they had to get paid first before they could pay the tax. The British would spend first and then collect the tax. And, of course, the British would always spend more than they collected, because people would earn the money first, and they would earn more than was needed to pay the tax normally…”

“So the British always ran a deficit; they always spent more than they collected, and those extra crowns they spent were called the money supply. Those were the crowns people had in their pockets or in their homes, or the merchants would have them in their shops, and they all knew that the British would spend more than they collected, and that’s where the money supply came from, and that spending more than you collect is called deficit spending, and the amount that you spend more than you collect is called the public debt, and they all knew the public debt was the money supply, as they casually called it back then. Today we’ve lost sight of that.”

“We have twenty trillion of public debt; what is it? It’s not wrong to define it as the money supply for the economy. The government has spent twenty trillion more than it has taxed, and those dollars sit out there in bank accounts until they get used to pay taxes. So the twenty trillion are the dollars spent by the government that haven’t yet been used to pay taxes. That’s our public debt, and that’s all it is.”

“The money story starts with the government trying to provision itself, which is very different from the mainstream story.”
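The spend-first, tax-later accounting in Mosler’s story can be sketched as a toy ledger. The class and figures below are invented for illustration (they are not part of any formal MMT model); the point is that the cumulative deficit and the currency held by the private sector are the same number by construction:

```python
# A minimal sketch of a currency issuer that spends its money into
# existence and taxes some of it back out. Whatever has been spent but
# not yet taxed away is both the "public debt" and the money the
# non-government sector holds. All figures are made up.

class CurrencyIssuer:
    def __init__(self):
        self.spent = 0   # total currency ever issued via spending
        self.taxed = 0   # total currency removed via taxation

    def spend(self, amount):
        self.spent += amount

    def tax(self, amount):
        self.taxed += amount

    @property
    def public_debt(self):
        # cumulative deficit: spending minus taxation to date
        return self.spent - self.taxed

    @property
    def private_money_supply(self):
        # identical by construction: every unit spent and not yet
        # taxed back sits in someone's pocket or account
        return self.spent - self.taxed

gov = CurrencyIssuer()
gov.spend(100)   # e.g. pay workers 100 crowns
gov.tax(60)      # collect the hut tax
print(gov.public_debt, gov.private_money_supply)   # 40 40
```

The two properties are deliberately the same expression: the “debt” and the “money supply” are two names for one ledger entry, which is the point the transcript is making.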

I did some research into the British efforts in Ghana. What Mosler stated above is confirmed on the BBC’s own Web site:

One of the central pillars of colonisation was tax. The European powers did not want Africa to be a drain on their treasuries, and they wanted the colonies to pay their own way. They also wanted people to enter into the cash economy. Taxation was a way of driving people into working for money.

The competence of a French colonial official might often be measured by how much tax he was able to collect. This could be in the form of a poll tax or a tax on homes. For the ordinary people, especially those who were not earning money through labour or selling goods, taxation was an intolerable burden. Resentment turned to anger in many parts of Africa…

The Story of Africa (BBC)

Here is some more detail about the British and French colonial activities in West Africa: 8: Colonial Rule in West Africa (WASSCE History Textbook)

So the libertarians are partly right when they say that taxes are imposed by state violence. Where they fall down is in ignoring the fact that, before that state violence, these were effectively unmonetized traditional societies, not the barter economies Adam Smith imagined. International markets destroyed these societies; markets were not an integral part of them, except as supplemental venues for exchanging surpluses. It was the violence itself that created the markets of which libertarians are so deliriously enamored!

Markets were also created by law. The competitive labor market in Britain was established in 1834; before that, industrial capitalism as a social system did not exist. What Britain did internally it later did externally.

Colonialism and hut taxes destroyed the traditional provisioning methods of these societies. Rather than the traditional subsistence agriculture which had always fed the people, the colonizers wanted farmers to grow cash crops for export. This meant that food producers were now dependent upon the money gained from selling that crop to buy whatever else they needed, including food. And the amount of money they gained from that crop was entirely determined by distant global commodity markets over which they had no control.

Sometimes people wonder how it is that a country like Côte d’Ivoire can produce a nonedible crop such as cocoa but at the same time have difficulty feeding its people. The answer to this question involves two factors: colonialism and the introduction of cash crops.

The European colonial powers in sub-Saharan Africa, most notably France and Britain, expanded indigenous agriculture to include cash crops geared to the wants of European consumers and industries. The production of these cash crops for export depended upon plantation and sharecropper systems. Britain organized the commercial production of cocoa in Ghana, wool and coffee in Kenya, and tea in Zimbabwe. France arranged the production of peanuts in Senegal and Mali, and cocoa and bananas in Côte d’Ivoire.

The economic shift toward cash crops significantly decreased the production of goods for local food needs and at the same time destroyed indigenous handicraft industries.

The kingdom of Buganda, in what has been Uganda since 1894, provides a good example. Once Bugandan peasants were forced by British colonists to produce cotton and coffee for export, the local barkcloth and pottery crafts disappeared. Likewise, people began working in mines, fields, and plantations for export production in order to pay household or hut taxes and retain access to land. As a consequence, indigenous societies lost cultural knowledge and agricultural production for local food needs declined. Indigenous labor and resources were used to sustain and develop European urban and industrial needs.

Child Labor in Sub-Saharan Africa, by Loretta Elizabeth Bass p. 32

So the “starving African” and “starving Indian” stereotypes were a product of the creation of market society, and not natural features of their daily existence, and certainly not due to “low IQ” as the Alt-Right race fetishists would have it. The “poverty line” used by international agencies is exclusively determined by the cash income earned by people in these post-colonial economies. The fact that there was no poverty line before colonialism is always ignored. Today, the lifting of people in developing countries above the poverty line is used as the primary defense of Neoliberalism by its supporters.

Capitalist logic defines poverty in strict monetary terms, such as the poverty line, in which people don’t possess enough money to purchase basic goods from the market. In this way, capitalism not only makes people dependent on the market for survival, but also does not acknowledge alternative factors which contribute to human wellbeing, such as health, education, affection, the environment, or life in the community.

The Obsolescence of Capitalism (Medium)

Later, marketing boards were introduced to deal with the random price fluctuations that were the cause of so much misery:

The extraction of the agricultural surplus for urban/industrial development has been a common objective of both colonial and postindependence rule. Colonial authorities imposed various taxes (head taxes, hut taxes) and compulsory planting of selected export crops to stimulate the production of export crops and to capture the agricultural surplus. Shortly after World War II, the British colonial government introduced marketing boards in their East and West African colonies following the relatively successful record of marketing boards in Australia and New Zealand since the 1930s. The objective of the marketing boards was to stabilize producer prices and foreign exchange earnings and to reduce interseasonal price movements.

In the 1960s and the 1970s, numerous African governments introduced grain boards to control producer prices of food grains and to channel food to the urban centers. Boards usually accumulate and carry stocks to mitigate both intra- and inter-annual fluctuations in price and supply, and develop distribution systems to facilitate the transfer of grain from surplus to deficit regions.

A Survey of Agricultural Economics Literature Volume 4; edited by Lee R. Martin pp. 44-45

A lot of MMT’s recommendations are about creating a “marketing board” for jobs—a “job board” for society, as it were—that would stabilize prices and demand for the commodity called “labor”. These would provide ways for people to earn the money that market society requires as a condition of survival, rather than leaving it up to the “free” market, which has devastated so many communities across the world.

ADDENDUM: This is tangential, but one of the comments I sometimes see in discussions surrounding economics is something on the order of, “capitalism has always existed…”

Um, no. This is an incredibly ignorant statement, oblivious to economic history. Yet it’s surprising how many people seem to believe it, or toss it off as if it were some sort of obvious statement. It’s only possible if we define capitalism so broadly as to lose all meaning of the term. I thought this comment to an article at the Guardian did a good job of illustrating what’s wrong with this kind of reasoning:

Jesus. Capitalism involves people thinking rationally about their self interest, therefore all instances of people thinking rationally about their self interest are engaging in capitalist modes of thought.

Football is a sport that involves players running therefore anyone who is running is a football player.

Christmas is a religious festival marked by people exchanging gifts therefore whenever people exchange gifts they are celebrating Christmas.

Demographics Redux

A while back, I wondered if the low unemployment rate was down to simple demographics.

The reason I thought that was this chart:

Look at that peak. It plateaus, from what I can tell, from about 1958 to 1963. It then falls off a cliff, bottoms out around the year I was born (1973), and never recovers, despite a very slight uptick in the late 1980s (the “echo boom”).

While it’s harder to find data for other countries, here’s Canada. It appears to bottom out in the late 1980s:

In the U.K. it was more of a Bactrian hump than a dromedary one (two humps instead of one), possibly due to the rationing years. But the cliff still occurs at approximately the same time:

Wikipedia has more data on it: Mid-twentieth century baby boom.

It notes that the baby boom was strongest in Australia and New Zealand, which I didn’t know, and that in Europe it was strongest in France and Austria and weakest in Southern Europe. It includes this chart for the United States:

Someone born in 1963 would be 2020 - 1963 = 57. That’s the youngest cohort. The oldest would be those born about 1947: 2020 - 1947 = 73. 55-65 is prime retirement age.

Note that we’re also hearing about how the stock market—where many retirement savings are invested—is at all-time highs. This puts retirement in reach for many baby Boomers.

This means that the people at the bottom of the trough would be my age (47) and younger. Again, note that while the birth rate ticks up slightly, it never recovers.

Now, I’m told that these are supposedly my “peak” working years. That is, workers in my age range are desirable: not too old, but old enough to have acquired experience. But there are a hell of a lot fewer of us than there were in the very recent past!

That’s before you consider the demographic effects of increased mortality and incarceration which took such a huge toll on people who became adults after the sharp turn to the Right that began with Reagan in 1980. The “deaths of despair” began increasing after the year 2000. That means a lot of people are simply “missing” from the work force due to the tragedies of modern life. I’ve seen a heartbreaking number of people around me dying before their time–people in their thirties, forties and fifties.

When I first posited the idea, some readers had objections to it, and they made very good points. But I still thought that demographics must have some sort of impact, given the charts, above.

What brought this back to my attention was this article posted at TYWKIWDBI, which deals specifically with Minnesota, but is also relevant to the U.S. more generally:

The Minnesota economy will also be squeezed as the last baby boomers retire in the decade. The 3 million-person labor force will essentially stop growing in the first five years of the 2020s, demographers say, and pick up only slightly after that.

This leveling off is already being felt across the state. Job vacancies have outnumbered the unemployed in Minnesota for two years. Businesses, governments and ordinary people find it’s harder to get things done. Hiring is especially challenging at restaurants, factories, schools and hospitals. Things aren’t delivered on time.

Such difficulties are mainly seen as effects from an economic upturn that, having started after the 2008-09 recession, has lasted longer than any other. But they’re also a product of the less-understood slowing of population growth…

This pattern is projected to continue into the 2030s and 2040s. There were so many baby boomers that their deaths from old age will offset the number of people being born or immigrating. By 2034, the Census Bureau says there will be more Americans over 65 than under 18. Similar challenges exist around the world. As populations become larger, it’s harder to sustain previous high rates of growth.

Population trends (TYWKIWDBI)

Which confirms the observations I made back then.

The precedent for the overall size of the economy has been set by the Baby Boom. Economies do not like to shrink, if for no other reason than the need to generate sufficient growth to pay back past loans. Thus, employers will continue to search for enough employees to keep the economy the size it was during the Boomer years, with a shrinking pool of workers to draw from.

That’s excellent news for us workers, as it places a modicum of power back in our hands instead of being totally on the side of employers. There are the headwinds of automation and globalization to contend with, of course, complicating the situation. But if there is less competition for jobs, perhaps colleges will not be able to turn entire generations of Americans into indentured servants. They will lose their role as guardians and tollbooths to a moderately comfortable middle-class life, which would be a positive development, indeed.

This will, of course, be framed as horrible, horrible in the mainstream media, which is owned by major corporations and run for their benefit. Of course, the reason the media will declare this as a very serious threat is precisely because it benefits workers over capital and threatens to raise wages for workers–a process which is so far being fiercely resisted by big business.

There will be urgent and persistent calls for increased immigration in the mainstream corporate media, especially if wages start rising. While Trump’s immigration policies are barbaric, if the average American sees the benefits of a tight labor market, Trump will (unfortunately) remain popular. The Left has really shot itself in the foot by encouraging mass immigration for the past several decades, which decimated what used to be its base of working-class people, as the chart above shows. Instead, it counted on support from a small niche demographic of wealthy, upper-class urban professionals who did not have to compete with immigrants and who benefited from low wages in the service sector.

That came around to bite them. That’s not enough to win. Immigration was a wedge here in the States, and we saw it as a major wedge in the most recent U.K. election which gutted the Labour party. Both ostensibly “leftist” parties paid the price for abandoning their working-class constituencies by embracing the globalism touted by what’s often been termed the “professional managerial class” (PMC).

It should also be noted that recessions generally take a decade to shake out “naturally,” that is, without the appropriate Keynesian counter-stimulus. Since the meltdown began in 2008, 2018 would be when we would expect the economy to start recovering in earnest, and this is pretty much what occurred. Again, unfortunately, this happened while Trump was in office. Of course, he’ll tout his tax cuts, but there is zero evidence that they have had any effect besides enriching the donor class. The quality of the new jobs is another matter as well, but that’s for another time.

So natural economic cycles have joined with demographics in the U.S. to encourage a relatively favorable economy for average people, at least for now. Trump will run out in front of the parade and claim he is leading it. The challenge for the rest of us is to ensure that the tactics corporations employ to prevent rising wages and worker empowerment are not effective. That will give us some ability to make things better in the future.

Incidentally, the U.S. population growth rate continues to decline, most likely because of the demise of the middle class: US has slowest population growth rate in a century as births decline (The Guardian)

Housekeeping Notes

Over the New Year I changed the WordPress theme; the old one was getting a bit stale. While I liked the banner picture up top, it was taking up too much screen real estate.

I like the cleanliness and overall layout of this WordPress theme. There are a couple of issues, however. For one, the font is a bit too big, but you can adjust this in your browser: if you have a scroll wheel, CTRL+Scroll will shrink the text.

More pressingly, the new font prevents me from bolding parts of the quoted text. Overall, I like how quoted text is broken out, but I’m used to bolding specific passages of interest, and I’ve been told by readers that this is helpful. So it’s kind of a big deal that I can’t do this anymore.

I’ve also noticed that subheadings are coming in at the same font size as the title, instead of smaller. This can be confusing, since it makes a single post look like multiple posts.

Theoretically, if I can get access to the cascading style sheets for this theme I can tweak it. But so far I’ve been unable to figure out how to do that in WordPress. I may have to pick a different theme entirely if I can’t figure out how to tweak this one.

Anyway, apologies to readers for these issues. Hopefully I can resolve them quickly. Normally I like to write new content instead of fiddling with WordPress, but I guess I’ll have to bite the bullet and figure this stuff out. Thanks for your patience.

UPDATE: the new theme appears to correct many of these problems. Let me know if there are any other issues. Thanks!

The Legal Nature of Money

Last time we looked at the book The Code of Capital. The big idea of that book is that what constitutes capital (wealth-generating assets) is the way those assets are encoded by law. Law is what flips an asset from just another idea or piece of paper into something you can make money off of. And now, theoretically, anything can be coded as capital, even one’s own labor.

But law is intimately involved in more than just capital. It’s also behind what constitutes money.

We think of money as some neutral thing. But underneath it lies the same system of laws and regulations that creates capital. Certain types of money take precedence over others. Certain types of money have a higher claim on real resources, and those claims are determined by the state via law.

I’ve read about that idea before, but it was reinforced by this podcast with Rohan Grey, an attorney originally from Australia who writes about the money system. Here he is on the essential legal nature of how we order and design our economy and economic transactions:

[10:01] Rohan Grey: “I sometimes think of the scene in The Matrix where he’s kind of looking at everything and [seeing] the green lines of code [behind everything].”

“Not that I would recommend anybody go to law school, but one of the things that is good about going to law school is that it trains you to see the legal ‘code’ behind almost every issue.”

“You know, you’re walking down the street and you see something, and go, ‘wow, that’s a lawsuit waiting to happen.’ Or you see someone putting up electrical wires, and you think, ‘I wonder who approved the local council ordinance for that to happen this way,’ or something.”

“So there’s just so many different things where, to borrow a line from the poet Rilke, ‘Don’t be confused by the surfaces; in the depths, everything is law.’”

“And so when you think about what money is, there’s a lot of times when people get hung up on either the physicality of it–whether it’s paper, or a coin, or a blip on a computer screen. They think money is the thing that they can point to, or the thing that they can hold.”

“There are other people who think that, very crudely speaking, money is what money does. If something is a store of value; if something is a medium of exchange, it is money.”

“Whereas, I think what we would say is that money is first and foremost a *relationship*. It’s a statement of social relations between people. And those social relations are structured by law and legal dynamics.”

“To give this an example or analogy, when people talk about what property is–property isn’t the thing…If I had a house, that wouldn’t be my property. My property would be the legal title to the house. And that wouldn’t mean that I suddenly owned the house. It would mean that I had a set of legal rights, and legal claims, against other people.”

“So when we talk about the property right to a house, the first thing that comes to your mind is a house. What we’re really thinking about is the relationship between me, and everyone else in the world, with respect to the house. It’s that web of invisible filaments between me and the state, between me and you, between me and other people who inhabit the house. That web is what the property right is, not the house itself.”

“So, to take that idea to money, it doesn’t matter whether we’re taking about a coin, or a paper note, or an account entry on a balance sheet, or a computer blip on a screen. The essence of money is the legal relationships that are structured between me and other people with respect to some sort of instrument, or some sort of monetary value.”

“And that relationship can be structured in different ways. It can be a form of private credit. So if I ‘owe you one,’ in the kind of broad favor sense, that might not be money. But if I owe you *one dollar*, and you know you can take me to court over that, and then you can take that IOU that I have for you…and give it to somebody else, and that person can take me to court, then that might be money. So something that started off as a sort of personal, informal favor, can become money once it gains certain legal properties.”

“Another kind of money–or the kind of money that the MMT story starts with and thinks is the most central to modern societies where most relationships go beyond the people that you know by name…the dominant form of money is the money that the state issues and says, ‘we will accept this for any legal debt that you owe to us, or to other people.'”

“So the way that MMT boils that down for the average person, is to say, ‘taxes drive the value of government money.’ Out of all the kinds of money that could be out there; out of all the kind of private credit relationships…the most important is the one that the state says will be acceptable for its own IOUs–its own debts to itself: taxes, court judgements, criminal fees and fines, [penalties]–that kind of money has the most wide acceptability because everybody knows that at some point they might incur legal damages. They might be sued. They might have to pay taxes. They might have to pay some sort of fee or fine. And if *they* don’t have to pay, someone else is going to have to pay, which means that if they accumulate some money, at the very least they’re going to be able to offload it to somebody else who needs it.”

…There’s only two things in life that are certain: death and legal liability risk. Even if you think you’re off-grid, even if you’re living in the Canadian wilderness and hunting bison with a bow and arrow…somebody could come along and get in an altercation with you, and the next thing you know you’re getting a court summons because they’ve sued you for hurting them…So it’s almost impossible to imagine a world where you’re not at risk. Even if you don’t have an *actual* bill from the government due tomorrow, you’re at risk of facing a bill from the government, even if you don’t have to pay taxes.
So that idea that at some point you might find the need to pay some sort of legally-denominated debt means that you–and anybody who is in a similar position–is going to want to make sure you have access to some of the money that can pay that debt.

Which means that the best way to think about that money…is that it’s a tax credit, or that it’s a legal credit. And therefore, any instrument–whether it’s virtual or physical–that legally is recognized as being a tax credit, is going to have some degree of money[ness].

In Rohan Grey’s telling, money is not treated in the abstract; its fundamentally social nature is taken into consideration. That nature is erased in standard economics curricula, which assume everyone to be an atomized stranger to everyone else, with no prior dealings.

We saw this even with “primitive money.” Earlier we looked at the essay “Primitive Money” by George Dalton. There we saw that “primitive” money isn’t used so much as a means of exchange for settling spot transactions between strangers, as it is in our culture. Rather, it is used as a means to discharge social obligations between members of a society.

Primitive money performs some of the functions of our own money, but rarely all; the conditions under which supplies are forthcoming are usually different; primitive money is used in some ways ours is not; our money is impersonal and commercial, while primitive money frequently has pedigree and personality, sacred uses, or moral and emotional connotations. Our governmental authorities control the quantity of money, but rarely is this so in primitive economies.

There are a couple of reasons why this is so. For one, there are very few “strangers” in traditional societies. For another, markets are not very important in those societies. They are tangential places of exchange, but they don’t order social relations, nor are hypothetically “self-adjusting” markets the sole means of resource distribution.

In our society, our social relationships tend to be structured primarily by money transactions in markets. There are exceptions of course—we still have families, after all. We can think of citizenship as another way to structure social relations between people.

But in traditional societies, social relationships are not structured around money. They are structured by other things—usually kinship. Money is simply one of the means of discharging one’s social obligations.

Some common occasions for social obligations are weddings and funerals; bridewealth and dowries are a couple of examples. But delict and crime are another example, and one which illustrates the social nature of money. In these cases, what constitutes payment, and how much is required for the settlement of various offenses, will be determined by the relevant authorities. Ancient legal codes had a schedule of payments from one group to another based on the offense committed.

Relating to Rohan Grey’s argument above that the necessity of having a means of settlement with the authorities establishes the type of money that is most in demand, in many cultures what constitutes the acceptable means of settlement becomes the first type of money. In some cultures this is cattle; in others it might be shells, or metal coins, or whatever. Things then become priced in whatever that is, and a range of equivalencies are created (5 goats = 1 cow, etc.).

We’ve just abstracted so much that we’ve lost sight of this relationship.

If any means of settlement between two people or groups of people constitutes money, then we have an awful lot of different types of money. How do we differentiate them? The following are taken from a paper by Stephanie Bell entitled “The Hierarchy of Money.”

In [G. F.] Knapp’s treatment, all money represents a Chartal means of payment. That is, all money is a ‘ticket’ or ‘pay-token’, which gains validity by proclamation that it will be accepted as a means of payment. These ‘tickets’ or ‘tokens’ which individuals/institutions have proclaimed acceptable as a means of payment do not become money until they have been accepted by another individual/institution.

Going back to Keynes, then, a great number of ‘things’ will answer to the ‘description’ or ‘title’ of money. That is, every plane ticket, pre-paid phone card, movie ticket, subway token, etc. is a form of Chartal money. It will, therefore, be useful to narrow our focus and to proceed with a simplified discussion of ‘the hierarchy’.

This is where the concept of a hierarchy of money comes in. And that hierarchy is once again determined by laws and legal institutions.

…a money’s place within the hierarchy depends on the degree to which it is accepted by society…

…the ‘hierarchy of money’ can be thought of as a multi-tiered pyramid where the tiers represent promises with differing degrees of acceptability. At the apex is the most acceptable or ‘ultimate’ promise. But if all promises are denominated in the same unit of account, why are some deemed more socially acceptable than others? Whose promises will be the most acceptable? And why would anyone agree to hold the relatively less acceptable promises?

The paper then goes on to list the numerous relationships structured by debt, and where they sit in the hierarchy of money. The bottom tier consists of the debts of firms and households. The top tier consists of the debts of the state itself. The reason why the state’s debts rank higher than others is because the means of settlement with the state do not have to be converted into anything else in order to be valid:

To get business and household debts accepted, they might be made convertible into the debt of someone higher in the pyramid and may also require interest payments to compensate for the risk associated with holding less liquid assets…Unlike households and firms, state promises and certain bank promises would be accepted even if they were not convertible into anything else…Likewise, the state’s promises do not depend on convertibility into anything else…Recall that as the ‘decisive’ money of the system, both the state’s promises and banks’ promises rank high among the monies of the hierarchy…the legal obligation to pay taxes and the state’s proclamation that it will accept its own currency at state pay-offices elevate the state’s liabilities to the top of the pyramid, rendering them the promises with the highest degree of acceptability.

It concludes:

In short, not all money is created equal. Although the government, banks, firms and households can create money denominated in the social unit of account, these monies are not considered equally acceptable. Only the state, through its power to make and enforce tax laws, can issue promises that its constituents must accept if they are to avoid penalties. The general acceptability of both state and bank money derives from their usefulness in settling tax and other liabilities to the state.

What makes the capitalist system unique is that the debts between individuals can be monetized—that is, converted into the state’s ultimate means of settlement, thus expanding the money supply. Again, this is entirely a creature of law. As Pistor pointed out, the state’s money unit can be used to structure horizontal relationships between individuals, and not just vertical relationships between the citizens and the state:

The test of ‘moneyness’ depends on the satisfaction of both of two conditions. First, the claim or credit is denominated in an abstract money of account. Monetary space is a sovereign space in which economic transactions (debts and prices) are denominated in a money of account. Second, the degree of moneyness is determined by the position of the claim or credit in the hierarchy of acceptability. Money is that which constitutes the means of final payment throughout the entire space defined by the money of account. Pigou’s ‘money’ was ‘proper’ not simply because it was backed by gold, but because the state pronounced the abstract money of account and established its exchange rate with gold.

A further important consideration is the process by which money is produced. Credit relations between members of a giro for the book transfer and settlement of debt were, as Innes observed, extensively used as early as Babylonian banking. However, these credit relations did not involve the creation of new money. In contrast, the capitalist monetary system’s distinctiveness is that it contains a social mechanism by which privately contracted credit relations are routinely ‘monetised’ by the linkages between the state and its creditors, the central bank, and the banking system.

Capitalist ‘credit money’ was the result of the hybridisation of the private mercantile credit instruments (‘near money’ in today’s lexicon) with the sovereign’s coinage, or public credits. The essential element is the construction of myriad private credit relations into a hierarchy of payments headed by the central or public bank which enables lending to create new deposits of ‘money’ – that is the socially valid abstract value that constitutes the means of final payment.

Credit and State Theories of Money by L. Randall Wray et al., pp. 214–215

This is actually quite important in understanding money not just as an abstract thing, but as a way of social ordering created and reinforced by the state. Thus, you cannot have a market economy without a very specific set of laws and legal institutions established by states. And since the state is intimately involved, it makes no sense to argue, as libertarians do, that the state should somehow “get out of the way” of the market relations it established in the first place through laws.

The Code of Capital

An interesting book came out earlier this year by law professor Katharina Pistor called The Code of Capital. The book explains in detail the qualities that an asset has to have in order to generate wealth over time, and how the law can bestow such properties on an ordinary asset to turn it into a capital asset.

In other words, capital is coded via law, and a set of private legal institutions have been set up and used for centuries in order to code certain things into capital. The first asset coded this way was land, but the code has been extended to more and more things over time, and it continues to be expanded.

I think the book is important because the standard libertarian argument relies on a misunderstanding of things like private property, money and capital as being somehow “natural” and prior to the state.

But in reality, all of these things are created by human institutions, most particularly the state with its monopoly on coercive power. They are thus artificial creations designed to arbitrarily privilege some groups over others.

Pistor’s book focuses on capital, and her conclusion is that the institution which creates capital is the legal system. In the book, she attempts to document exactly how this is done. It also has some interesting historical insights into how capitalism emerged out of feudalism with much of the original power relations more-or-less intact.

Capital is an asset that has some potential to generate private wealth. In order to create a capital asset out of an ordinary asset, Pistor argues that you need to do some legal encoding of that asset. This encoding of an asset—grafting onto an asset a particular set of ideas enshrined in the law—is what she refers to as the “code” of capital. The “code of capital” is what flips a simple object, idea, or promise to pay into a wealth-generating capital asset.

Over time, the code of capital has transformed more and more things into income-generating assets. The code was originally developed with respect to land to preserve the wealth of landlords from challenges to their power. Over time, these legal codes have been used to make capital assets out of things like financial instruments, debt and intellectual property.

Pistor argues that you need to encode three out of four of these concepts to flip an ordinary asset into a capital asset. They are:

1. Priority – Having more senior rights to an asset than other people. This is the fundamental rule you need in order to create a capital asset, or even to have private property.

Legal institutions create ranks and prioritize some claims over others. This matters in insolvency, for example, where claims are ranked from strongest to weakest. The strongest claims feed at the trough first, while the weaker “runt” claims get the leftovers, if anything. How those claims are ranked is determined by the law.

Priority rights are the minimum requirement. But to create a capital generating asset, you need more.

2. Durability – If an asset can end up easily on the auction block, the ability to accumulate wealth over time is limited. Durability protects assets and asset pools from too many counterclaims. It extends priority rights in time.

Durability was first established using the entail in English common law to protect assets from being seized by creditors. See Fee tail (Wikipedia)

3. Universality – Priority and Durability need to be enforced against everyone, not only against the parties with whom you have directly negotiated these interests—erga omnes in the legal terminology.

Universality gives lawyers the ability to create assets that have priority rights that will be universally enforced not just against the contracting parties, but against anybody, whether or not they knew about the arrangement or were parties to the deal. Universality extends priority rights in space.

4. Convertibility – This allows you to not just be able to transfer an asset, but also to flip the asset into another, safer asset if necessary.

This most obviously comes into play in turning an object into cash. State-issued currency retains its nominal (though not its actual) value. The ability to do this is especially critical in giving financial assets durability. Without being able to “cash out” paper assets, they might lose much, or even all, of their value. As Pistor describes, convertibility was especially critical in turning things like CDOs and other debt-based securities into capital assets.

When these four legal ideas are grafted onto an asset–any asset–it becomes an income-generating asset, that is, capital. Any object, promise of payment or idea can be flipped into a capital asset using these legal tools–this “code.”

A lot of these modules were first developed with respect to land, and then were grafted onto other assets: farms, debt, firms, know-how, and even data, with data being the most recent and ongoing.

The legal modules to do this have remained fairly stable over time. She enumerates them as: Property law, Contract law, Corporate law, Collateral law, Trust law and sometimes Bankruptcy law, which can mimic features of all the others.

This also provides a useful definition of what capital is. Obviously, capital is the beating heart of our economic system. Yet, remarkably, people still argue over what it even is!

Marx entitled his masterwork Capital, and placed it at the center of his analysis. Following him, the overall economic system has been termed “capitalism.” But a solid definition of what does and does not constitute capital has remained elusive. Using the above, we might define it as an asset—physical or otherwise—that possesses three out of four of the legal properties of priority, durability, universality, and convertibility.

Economists often claim that the central factors of production are land, labor, and capital. But land and labor can be capital. In fact, anything can be capital if it is coded as such. She points out that one’s own labor can be coded as capital by establishing a corporate entity and issuing dividends to yourself as a corporate shareholder in lieu of a salary.

What this demonstrates is that the law is intimately involved in the creation of capital.

“What are the functions that law plays? What you need to convert an asset into a capital asset is a credible commitment of enforceability. You want to make sure that you can enforce your rights at some future date in some place, and maybe even in some place outside your own jurisdiction. You need to have the institutionalization of the centralized means of coercion that private parties can use to organize their private affairs so that they can bank on enforceability. At some level, at every stage in the creation of capital and the creation of financial markets, I would say in the creation of markets in general, the state is deeply involved.”

But what do we mean by law? Law is a particular institutionalization of the central state’s coercive powers. Pistor distinguishes three dimensions in which we have institutionalized law:

1. Top-down vertical ordering – the state enforces order among its citizens through a monopoly on coercive violence.

But the flip side of this vertical dimension is that, in rule-of-law based constitutional systems, citizens can also use the law to protect their interests against the state. This aspect is often ignored. Bottom-up vertical ordering allows private actors to supersede the state; to “tie its hands” as it were.

The centralization of the means of coercion on the one hand, and the allowing of individuals to avail themselves of the legal system to protect their private property rights against the state through civil and political rights, was an enormous institutional revolution.

2. Horizontal ordering – Private parties can employ the coercive properties of the state to organize their own private affairs. This means that private relations can be structured much more forcefully than they otherwise could be by private parties availing themselves of the state’s legal system.

Pistor traces this legal coding all the way back to the thirteenth and fourteenth centuries in England. For whatever reason, England developed a very powerful private legal profession very early on. Wealthy landowners commonly availed themselves of legal services provided by professional attorneys much more often than their continental counterparts. On the continent the legal profession was less empowered. France controlled them in a top-down fashion, and Prussia halved the private legal profession in the eighteenth century because they were seen, correctly, as a threat to state power.

As Pistor depicts it, many of these laws originated for the benefit of the large, aristocratic landlords—in essence, to preserve the power relations under feudalism. They enshrined these pre-modern, pre-legal, pre-constitutional power relations into law. The lawyers who did this were often descendants of these very same aristocratic landowning families, so it’s obvious whose side they were really on.

This adds an interesting perspective on the libertarians’ “year zero” problem. They usually argue for a “night watchman” state that only protects private property rights and lets everything else just sort of work itself out. But where do these property rights come from in the first place? Why do some people have priority over others?

Matt Bruenig makes this same point:

Perhaps the most interesting thing about libertarian thought is that it has no way of coherently justifying the initial acquisition of property. How does something that was once unowned become owned without nonconsensually destroying others’ liberty? It is impossible. This means that libertarian systems of thought literally cannot get off the ground. They are stuck at time zero of hypothetical history with no way forward.

How Did Private Property Start (Jacobin)

Pistor’s book fills in some of the gaps in that process. It was through law that such priority rights were established, and legal decisions usually favored certain stakeholders over others. These decisions cannot always be said to be “fair.” Rather, she argues that they come from whatever legal arguments happen to carry the day in court.

During the sixteenth century in England there was a legal dispute over who had better property rights to the land—the landlords or the commoners. Who had priority rights over the commons, the peasants or the aristocracy?

The commons was an area where multiple, overlapping stakeholders had multiple, overlapping claims, and those claims were balanced against one another for centuries by traditional customs without written legal precedent determining ownership. So whose claims would take priority if one side defected, and whose claims would be downgraded, or even dismissed?

At first, this dispute was conducted in a decentralized fashion in hundreds of sporadic conflicts all up and down the British Isles. The landlords attempted to assert their rights over the commons. The commoners rebelled. There was violence; fences were broken and hedges dug up. The “Diggers” were so named because they dug under the fences and hedgerows planted by landlords to mark their territory.

Eventually, these disputes wound up in the courts. The attorneys—by and large children of the nobility—argued on behalf of the landlords. The landlords won. The argument that carried the day in court was seniority—the landlords had the stronger claims to the land because their rights took precedence. In essence, they were there first. Another strike against the commoners was that they were not organized into a single, coherent corporate entity, so unlike the landowners, they could not assert collective rights. They were simply seen as numerous private individuals by the courts. More recent scholarship has shown that by the early seventeenth century, two-thirds of arable land in England had already been enclosed, even before the major Enclosure Acts were passed.

These decisions gave the landlords priority rights. But you also needed to have shielding devices to create sustainable value.

To this end, the landed elite in England learned how to entail their land to preserve it down through time. Lawyers took a page from feudal law and argued that the contracts that potential creditors had entered into were with the “life tenant,” who was not the real owner—he merely held the asset for future generations, and so could not forfeit it. Under the feudal law, this meant that creditors could only seize 50 percent of the land, and never the family mansion.

Entailment gave English landlords durability. When the land was no longer able to generate sufficient revenue thanks to the repeal of the Corn Laws and the flooding of the market with cheap grain, the landlords could shield themselves from creditors and keep land in the family. This caused a debtor crisis by the mid-nineteenth century. In 1881 the English courts declared that the life tenant was the true owner and therefore creditors could seize all of the land. After the Land Settlement Act and the Land Conveyance Act were passed, almost 20 percent of land changed hands. The repeal of durability greatly affected the value of land as a capital asset.

When England started seizing lands from aboriginal peoples all over the world, obviously the “they were here first” argument wouldn’t hold water. So the attorneys switched up their arguments to improvement and discovery. Improvement is the argument made by John Locke, i.e. you combine your labor with the land to make it productive, so that gives you ownership rights to the land. Discovery is a sovereign territorial claim. It boils down to, essentially, “finders keepers.”

Note that this is the inverse of the priority rights that were argued during the enclosure movement. Under the feudal system, it was by-and-large the labor of the commoners that brought forth the fruits of the land. Yet back then, this gave them no special claims to ownership! Landlords had some legal and administrative duties back during the feudal era, but the Crown (i.e. the state) had largely taken over those functions by the time these disputes showed up in the courts. Thus, landlords contributed very little labor to the land, and yet they claimed exclusive ownership rights over it, and won in court!

And, of course, how does the labor theory of property apply to a financial asset?

In other words, what justifies one’s claim to an asset appears to be whatever the apologists for those with power argue it should be. And that obviously favors those already with the power.

Pistor highlights the fight by the indigenous Maya to encode their collective use rights as property rights in the Supreme Court of Belize. The constitution of Belize—as most constitutions do—says that property rights will be protected. But it does not define what counts as property—it simply assumes it. The Supreme Court of Belize eventually recognized the priority claims to the ancestral lands by the indigenous Maya. But what they didn’t do was use the state’s coercive power to back up those claims. Instead, Mayan land continued to be bought and sold by outsiders.

In telling this story, Pistor’s core point is this: what we recognize and what we do not recognize as property is a political decision that we make. In making these decisions the state tends to favor the rights of those who will generate more wealth for the state.

Since the end of the nineteenth century in Britain, we’ve shifted from protecting the landowners and their capital to protecting the credit claimants. By elevating creditor claims above all other claims, we have allowed financialization to occur. This has subsequently engendered all the “exotic” financial instruments based on debt that we see circulating today.

Pistor goes into detail about how these legal coding techniques were used to turn exotic financial instruments into capital assets via law. In doing so, the features of durability, universality, and convertibility became paramount in turning paper claims into capital assets. In fact, it was in trying to understand the exotic financial instruments underlying the global financial crisis that she discovered the code of capital. For example, the “code” allowed the mortgage debts of millions of ordinary homeowners to be turned into capital assets that could be traded and used for wealth generation. Her arguments here are fairly complex, so if interested you should look further into the book.

An important point she makes is that land and other tangible objects still have uses even without any legal coding. You can grow crops on land. You can drive a tractor. You can milk a cow.

However, intellectual property rights and financial assets only exist in law. These are entirely creations of the law.

And so, she notes, we’ve created a legal system where we create brand new assets ex nihilo through the law, and then further enhance these assets with the additional attributes of priority, durability, universality, and convertibility to turn them into wealth-generating assets. Of course, this benefits certain people over others.

Finally, she addresses the issue of universality. Private law is domestic law, but we live in a global capitalist system. And so how can you have domestic law sustaining a system of global capitalism when we don’t have a global state?

Pistor argues that as long as all global states choose to recognize the features of a specific legal system, you can, in theory, have legal universality even without a single, global legal system. For financial capitalism, the legal systems that currently serve this purpose are the laws of England, the state of New York, and Delaware for corporate law. Thus globalization turns out to be a very parochial system of coding rooted in just two legal systems! This gives Anglo-Saxon firms a legal advantage in crafting these types of assets, including financial assets. The world’s largest and most powerful law firms are all headquartered in Anglo-Saxon countries, where most of the legal coding work is done.

“The globalization of legal practice which is the very foundation of our global capitalist system is ultimately a globalization of Anglo-Saxon, particularly American legal practices.”

What started centuries ago in land has been extended to corporations, to financial assets, to intellectual property rights, to data, and potentially to many more things. As she notes, even exotic things like DNA are being eyed as potential capital assets.

As a result, citizens of various states increasingly feel as if they’ve lost control of their own domestic destiny. With everything around them being rapidly turned into capital assets for international markets left and right, they feel helpless. They feel that collective self-governance has fallen by the wayside under this system. Pistor points out that Brexit was rooted in the idea that the people have lost their legal sovereignty. She argues that this perception was essentially correct, but it was not really a takeover by Brussels (the EU headquarters) but more accurately a takeover by London.

The Code of Capital illustrates that the neofeudal order that is coalescing today is not some inevitable force of nature, but an imposition of a specific legal code on all of us to turn the entire world into capital assets owned and traded by an international oligarchy of wealth, while local communities are steadily hollowed out. It is the endgame of global capitalism; the final gutting of civil society. Despite the assertions of neoliberals and libertarians, there is nothing “natural” about it. It is blatantly obvious whom the state’s monopoly on coercive violence now serves, and it’s not the citizens of the world’s various countries, but a transnational investor elite.

I’ve taken the above information from YouTube talks and interviews given by Professor Pistor.

Talk at the Watson Institute:

Talk in Brussels:

Majority Report interview:

Book review from the London School of Economics:

Christmas used to be just the way we lived every day

I’ve been studying a lot of history, and what I’ve read over the past few years has led me to the conclusion that the ideals we associate with the Holidays here in the Anglo-Saxon world—spending time with our loved ones, taking a break from our labors, feasting, singing, dancing, and reveling; helping the poor, lonely and downtrodden; charity, fellowship and brotherhood; putting aside conflict and working for peace; cooperation; open-handed generosity; and just basically celebrating human-centered values….

…These were once just the way we acted 365 days a year.

In other words, before Capitalism, Christmas was year round.

Once upon a time, we all lived in gift economies. This was what “economic” behavior consisted of: not buying and selling with gold coins in impersonal markets, and certainly not trying to “profit” from the people in your community with whom you lived and worked every day. This fact has been conclusively demonstrated by anthropology.

At Christmas, we revert to this atavistic gift-giving economy, with only conscience and sentiment compelling our behavior rather than necessity or rationality. It is not our instinctive nature to “truck and barter” as Adam Smith had it; it is our nature to be reciprocal, as even lesser primates demonstrate. We do not like to be only receivers; we like to be givers as well. Of course, we do so today in the context of a wider capitalist economy (we still buy our presents from corporations, by and large). But the fact that we mark out a special time to revert to gift-giving is fascinating, and tells us something, I think. If there is some “natural” economic behavior based in human nature, then I believe this must be it.

Before the advent of Capitalism, European society was organized around the values promoted by the Catholic church. Of course, these were often honored far more in the breach than in the observance. Men aren’t angels, after all. But this was seen as the ideal for human society and human behavior, even if it was practically unattainable due to our fallen nature. Our fellow Christians were our brothers, and even charging interest on loans was forbidden.

Now, the ideal for human society—its lodestar, if you will—is making money, consumerism, and maximizing stock valuations and the Gross Domestic Product.

This was the “disembedding” of the economy from human-centered social values that I’ve talked about so often, based on the ideas of Karl Polanyi. What was recast as “economic” behavior became permanently divorced from the moral order and pro-social behavior. Activities that would once have been looked down upon as immoral and sociopathic (such as raising the price of essential medicine for sick people) became celebrated above all others. There once was such a thing as a moral economy.

Now we were expected to behave according to the cold, hard calculus of market logic; that is, rationally, selfishly and hedonistically. What was good for me was paramount; what was good for thee—or for the society—was not, even if some pseudo-philosophers like Bernard de Mandeville insisted that they were actually one and the same.

This, then, became the ideal, not Christian charity or brotherhood. Even more extraordinarily, such behavior became recast as man’s “natural” state—just the way humans are. And this is continually emphasized even today by evolutionary biologists who claim indisputable recourse to timeless scientific truth. Hence, any behavior which deviates from this norm—like charity, giving stuff away, renunciation, or working less—is, by definition, “unnatural”: an aberration marring a nominally “rational” human species. The conception of man as a social being defined in relation to his fellow man—as well as to the broader society in which he was embedded—was abandoned as a quaint, old-fashioned relic of ignorance and superstition. We were now all self-sufficient Robinson Crusoes, each washed ashore on our own private island.

And that’s where we are today. I think we’ve forgotten that it ever used to be another way.

However, during Christmas, we are granted “permission” to deviate from this behavior just a little bit—and for a limited amount of time. A “pass,” as it were, to abandon the cold, hard logic of the market ethos: to give stuff away, to be freely generous, to not care so much about money, to spend time in “nonproductive activities” (like caroling and decorating), and to just basically have fun without an overseer constantly looking over our shoulder (let the brandy flow!). When your neighbor invites you over for Christmas dinner because you are all alone, they do not present you with a bill afterwards (well, some hard-core Ayn Rand supporters might, LoL).

The Holiday season allows us just a brief period to act in the ways we used to do all the time.

It’s at Christmas that we take a short break from the capitalist ethos and turn back the clock to the values we used to have the whole year round: back to what it meant to be a member of a society, back to the days when the prevailing ethos was inspired by the Church instead of the Market. In fact, during Christmas, these values are even celebrated—glorifying God, childlike innocence, caring for the lonely, sick, and less fortunate, merriment and good cheer, and just basically looking after one another. In other words, acting pro-socially.

When Market society came along, it stripped away all that. People were expected to behave “economically”, and that behavior became disembedded from society. In fact, we are compelled to act this way, regardless of our most deeply-held beliefs and sentiments.

And this “economic” behavior is quite different from what preceded it. The pursuit of individual gain became paramount. In turn, this required a whole new set of values: hoarding, a certain callousness and indifference towards poverty, a nose-to-the-grindstone work ethic, a belief in “self-reliance,” and looking down upon those with less as “lazy” and “charity cases.” Society became divided into winners and losers—a zero-sum game played out in the competitive arena of the Market.

This was a profound shift that I don’t think we’ve really come to terms with deep down even today. Markets came along and colonized every aspect of human life, so it’s no wonder we need a break from it sometimes! And the Holidays have become that break for us in the capitalist world; that reversion to pre-capitalist values. A brief glimpse into what Charles Eisenstein once referred to as “the more beautiful world our hearts know is possible.”

But the question is: do we truly act “unnaturally” during the Holiday season and “naturally” the rest of the year, as capitalist apologists would have it? Or is it, perhaps, the other way around?

Sadly, it only lasts during the season. Then it’s back to “normal”. It’s back to the “every man for himself” ethos. Back to “basic human nature.”

I suspect this originated with Charles Dickens. It’s no secret that Dickens was a trenchant critic of the callous society that capitalism had engendered in the Anglo-Saxon world of his time. He used his writing to appeal to the human heart in a way that only a great writer can—to appeal to the values that had been overshadowed by those of greed and accumulation. He reminded us all of what it meant to be human. This is most apparent in A Christmas Carol, but these values permeate his writings. Dickens lived in a time when remnants of that older tradition still survived, albeit in isolated pockets. We probably have him to thank for giving us this brief respite from the viper pit that is normal, everyday capitalist society.

It’s truly ironic that this exists alongside the orgy of crass consumerism and consumer fetishism that Christmas has become. But then, we are a bundle of contradictions, are we not?

I would suggest that if people enjoy the feelings of warmth produced by the Holidays, as so many do (suicide rates decline dramatically during the Holiday season rather than increase), they remember that this used to be the way we lived all the time. And really, nothing is stopping us from living that way again, at least as individuals. To make a choice. To realize—if only at the personal level—the more beautiful world that our hearts know is possible.

And in that may lie the hope of a better world for all.

Merry Christmas, Happy Holidays and a Happy New Year to all my readers!

Fun Facts, December Edition

The wealth held by the Forbes 400 more than doubled, from $1.27 trillion in 2009 to nearly $3 trillion this year (2019).

The effective tax rate paid by the wealthiest group of US citizens dropped from 27% in 2009 to around 23% this year, the first time they were effectively taxed at a lower rate than the nation’s working class.

Top 1%: Wages up 158% since 1979

Just 100 companies are responsible for 71% of global emissions.

Five companies own 80% of all stock in S&P 500 listed companies.

The Big Five tech firms have acquired more than 600 startups in the last two decades. Google alone has acquired one company per month, on average, for the past 17 years. This amounts to a disproportionate share of all startup acquisitions in the US.

The rate at which children are admitted to U.S. emergency rooms for sexual abuse almost doubled between 2010 and 2016, a new study finds. Researchers believe the rise could be related to increases in human trafficking.

There are now 2,101 billionaires globally – up almost 40% from five years ago

Dennis Ritchie, who invented the C programming language, co-created the Unix operating system, and is widely regarded as having influenced effectively every software system we use on a daily basis, died one week after Steve Jobs. Because of this, his death was largely overshadowed and ignored.

Ritchie and Kernighan (and the rest of the Bell Labs guys) are almost unknown to the public, despite creating the basis for modern programming and developing the foundations for all the software we use today.

Note that many of the people who *actually invented* the things that made the IT revolution possible are unknown to the general public. A lot of them worked in government labs, too. Instead, we’re treated to constant invocations of “Billgatestevejobs” (or, more recently, “Jeffbezoselonmusk”) by the media and politicians.

The guy who discovered insulin wanted it to be cheaply available to everyone, so he sold the patent to a university for only $1. The university sold that patent to a pharmaceutical company that now manufactures it for $3 and sells it for $370 (a roughly 12,000% markup). But only in America, where loopholes in patent law have allowed the company to keep the patent alive long after it should have expired.
Too bad Americans only vote on guns and fetuses.
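Incidentally, the markup figure above checks out: percentage markup is just (price − cost) ÷ cost × 100. A quick sketch using the $3 and $370 figures quoted above (the `percent_markup` helper is my own, purely for illustration):

```python
def percent_markup(cost: float, price: float) -> float:
    """Return the markup over cost, expressed as a percentage."""
    return (price - cost) / cost * 100

# Figures quoted above: manufactured for ~$3, sold for $370.
markup = percent_markup(3, 370)
print(f"{markup:,.0f}%")  # prints "12,233%", i.e. roughly 12,000%
```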

The wealth of the richest 1% of Americans is close to surpassing that of the middle class.

Technically, this means there is no middle class anymore. News flash.

A 2017 study by the Economic Policy Institute (EPI) found that in the ten most populous states, an estimated 2.4 million people lose a combined $8 billion in income every year to theft by their employers. That’s nearly half as much as all other property theft combined last year—$16.4 billion according to the FBI.

Over the course of the year, Americans collectively spent 70 billion hours behind the wheel—an eight percent increase since 2014.

In 1960, the median gross rent was $71, or $588 in today’s dollars. The current median US rent is $1,700.

Fake guns are banned in the downtown Las Vegas district, but real guns are OK.

With his Thanksgiving vacation, President Donald Trump’s golf hobby has now cost Americans an estimated $115 million in travel and security expenses ― the equivalent of 287 years of the presidential salary he frequently boasts about not taking.

The blood of poor Americans is now a leading export, bigger than corn or soy

Puerto Rico Has Lost 4% Of Its Population

Americans take fish antibiotics because they’re cheaper than a visit to the doctor.

One in every 200 people in England is homeless.

Nearly half of US residents are projected to be ‘obese’ by 2030, and 1 in 4 to have ‘severe obesity.’

Australia’s biggest forest fire has now destroyed an area seven times the size of Singapore.

Sigurd the Mighty, a ninth-century Norse earl of Orkney, has the dubious distinction of being the only person to have been killed by an enemy he had already decapitated several hours earlier.