What Germany Owes the Eurozone

The euro crisis has been with us for more than a few years now, but some fundamental questions must still be answered before there can be stability in the eurozone. In case you haven’t been following the news out of Europe, here’s some brief background on the situation: After the introduction of the euro in 2002, eurozone member countries enjoyed historically low interest rates on their government bonds. Investors were willing to lend their money to these countries at such low rates because they believed there was an implicit guarantee behind every government bond denominated in euros. That is, they believed German and Greek bonds carried equivalent default risk simply because both countries were members of the eurozone. Once the global recession hit in 2008, many countries in the periphery of the eurozone started to run substantial budget deficits (and revealed that past deficits were larger than previously reported), which quickly disabused investors of their one-size-fits-all approach to risk valuation. That led to the rising interest rates shown in the graph above. The rapid rise in interest rates made it more expensive for these governments to service their debt, adding to their already ballooning deficits. Larger deficits spurred higher interest rates, and the vicious cycle continued until the European Central Bank (ECB) took decisive action to put a ceiling on certain countries’ interest rates, temporarily halting the crisis.

Before the crisis can be permanently put to rest, Germany needs to decide how much austerity (i.e. spending cuts and tax increases) and reform it ultimately wants from the periphery countries: Greece, Italy, Portugal, Spain, and Ireland. In an ideal world for Germany, these heavily indebted countries would slash their national budgets quickly and severely to repent for their previous fiscal sins. Unfortunately for the Germans, and for the other strong eurozone countries like Finland and the Netherlands, the first steps toward austerity in the periphery have been a drag on those countries’ GDP growth, further deteriorating their debt-to-GDP ratios and making already hesitant international investors more skeptical of their ability to pay off their debts. Furthermore, German fears of hyperinflation, which the country experienced during the interwar period of the 1920s, have prevented the ECB from pursuing looser monetary policy. If the ECB were to announce that it would tolerate inflation closer to 4 or 5 percent, rather than 2 percent (see graph below), then periphery countries could inflate away some of their debt and, more importantly, real wages would fall more quickly, bringing them closer to their market-clearing level.

[Graph: eurozone inflation rate]

As of November 2012, unemployment in the eurozone was 11.6%, while Spain and Greece had 26% and 25% unemployment, respectively. With such a large percentage of their labor forces sitting idle, these countries are failing to achieve their potential economic output, which only exacerbates their debt problems.

Germany is determined to make the budgets of the periphery countries as austere as possible to atone for their past profligacy. This seems somewhat reasonable, given that those countries committed to running small deficits as a condition of joining the eurozone. But if fiscal consolidation is a rational demand to make of these countries, that only leaves looser monetary policy as a way to prevent a breakup of the eurozone. The important question is this: why should Germany, especially given its intimate history with hyperinflation, tolerate a rise in the inflation rate? Put simply, it must do so because it has benefited the most from eurozone-wide low and stable inflation. Prior to the creation of the eurozone, countries like Italy, Greece, and Spain would devalue their currencies to make their exports more competitive in the global marketplace. German workers are notoriously more productive than their counterparts in Southern Europe, so currency devaluation served as a convenient way for those countries to level the playing field in the export market. Now that there is a single currency, that option is off the table.

The New York Times recently ran an op-ed by Gunnar Beck arguing that Germany did not benefit the most from the creation of the eurozone. His main evidence concerned the import/export market, which we would expect to be most affected by the move to a single currency:

“German exports rose most — by 154 percent — to the rest of the world; by 116 percent to non-euro E.U. members; and least of all, 89 percent, to other euro zone members. In 1998 the euro zone still accounted for 45 percent of all German exports; in 2011 that share had declined to 39 percent.”

Though this information may appear to imply that Germany does not in fact owe the other countries in the eurozone anything, it neglects the most significant global economic trend of the past twenty years: the rapid and sustained growth of the emerging economies (e.g. Brazil, Russia, India, and China). Those countries, and many others included in the “rest of the world” category, have experienced years of double-digit growth in the past couple of decades. When you measure growth in German exports using 1998 as the base year, of course the increase in exports to the rest of the world will be larger than the increase in exports to other eurozone countries — the rest of the world was growing extremely fast!
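The base-year effect is easy to see with a toy calculation. Here is a quick sketch with invented numbers (none of these figures are Beck’s or official trade data): if exports simply track partner GDP, the faster-growing partner will always show the bigger percentage increase, telling us nothing about euro-specific benefits.

```python
# Hypothetical: German exports simply grow in line with each partner's GDP.
base_exports = 100.0             # index value for both partners in 1998

eurozone_gdp_growth = 0.30       # invented cumulative growth, 1998-2011
rest_of_world_gdp_growth = 1.20  # invented; emerging markets grew far faster

exports_to_eurozone = base_exports * (1 + eurozone_gdp_growth)
exports_to_rest = base_exports * (1 + rest_of_world_gdp_growth)

# The "rest of world" increase dwarfs the eurozone increase purely because
# of partner growth rates, not because the euro failed Germany.
print(f"Eurozone: +{exports_to_eurozone - base_exports:.0f}%")
print(f"Rest of world: +{exports_to_rest - base_exports:.0f}%")
```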

The data put forth by Mr. Beck obscure the real and significant benefit Germany receives from the euro: control over the inflation rates of many of its trading partners. Countries like Greece can no longer run double-digit rates of inflation to make their exports more competitive relative to Germany’s. To give you an idea of how dramatic a shift this is, look at the chart below of Greece’s inflation rate history:

[Chart: Greece’s historical inflation rate]

Prior to its adoption of the euro, Greece depended heavily on currency depreciation to offset the low productivity of its labor force and to maintain the competitiveness of its exports on world markets. Now Greece, and every other country struggling with burdensome debt levels, is committed to 2% inflation forever because Germans neither need nor want the price level to rise more rapidly. But if Germany wants to safeguard the progress made toward European unification, it needs to let the ECB pursue a higher inflation target, possibly in conjunction with an unemployment target similar to the one adopted by the Federal Reserve this month. Germany owes the eurozone at least that much.

The Market for Lemons — and Financial Products

Elizabeth Warren recently won the Massachusetts Senate election against incumbent Scott Brown, receiving about 54% of the vote. One of the key issues in the race was how the federal government should regulate the finance industry, specifically with regard to Dodd-Frank and the Consumer Financial Protection Bureau (CFPB), Warren’s brainchild. Senator Brown favored giving Wall Street more leeway in regard to government oversight, a position which earned him significant campaign contributions from the finance sector. Warren, on the other hand, is a liberal academic who is well known for her criticisms of Wall Street. Most notably, she formally put forth the idea for a CFPB-type government agency in a 2007 article for Democracy: A Journal of Ideas, an article that launched her into the political arena.

I was first alerted to the article by Ezra Klein via Twitter on election night, and after reading Warren’s work, I am happy to see that she won election to the Senate. The piece is a cogent argument for why we need to regulate financial products just like we regulate any other products — to protect consumers from harm. The crux of her argument is easily grasped from the following excerpt (though do read the whole thing):

Consumers can enter the market to buy physical products confident that they won’t be tricked into buying exploding toasters and other unreasonably dangerous products. They can concentrate their shopping efforts in other directions, helping to drive a competitive market that keeps costs low and encourages innovation in convenience, durability, and style… Just as the Consumer Product Safety Commission (CPSC) protects buyers of goods and supports a competitive market, we need the same for consumers of financial products–a new regulatory regime, and even a new regulatory body, to protect consumers who use credit cards, home mortgages, car loans, and a host of other products.

Warren goes on to give a brief history of consumer finance and how the industry has become more complex over the years. Though her argument in favor of tougher regulation is persuasive and well thought out, Warren misses the key economic insight on which her argument rests: asymmetric information. Economists say a market has asymmetric information whenever the buyer or seller knows more about the product than the other party. We generally assume the seller has more accurate information than the buyer because it makes intuitive sense that the owner of a product would know more about it than a prospective customer. This idea of an imbalance of information was first proposed by George Akerlof in his famous 1970 paper The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, in which he cites the used-car market as a prime example of the phenomenon. He finds that:

There are many markets in which buyers use some market statistic to judge the quality of prospective purchases. In this case there is incentive for sellers to market poor quality merchandise, since the returns for good quality accrue mainly to the entire group whose statistic is affected rather than to the individual seller… As a result there tends to be a reduction in the average quality of goods and also in the size of the market.

The resulting reduction in the average quality of goods and the size of the market implies there is a role for government intervention to increase total welfare. If through government regulation the information known to buyers and sellers becomes more symmetric, then the adverse effects previously mentioned will be reduced.
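Akerlof’s unraveling logic can be sketched in a few lines of code. Everything below is invented for illustration (the quality distribution, the 1.5x buyer valuation, the round count); the point is only to show the mechanism: buyers bid based on the average quality they observe, the best sellers withdraw, and both average quality and market size shrink.

```python
import random

random.seed(0)

# Hypothetical used-car market: quality is uniform on [0, 1].
# A seller's reservation price equals her car's quality; buyers value a
# car at 1.5x its quality but can observe only the AVERAGE quality of
# the cars still on the market, so they bid 1.5x that average.
pool = [random.random() for _ in range(10_000)]
initial_size = len(pool)

for _ in range(20):
    if not pool:
        break
    bid = 1.5 * sum(pool) / len(pool)     # buyers bid on average quality
    pool = [q for q in pool if q <= bid]  # above-bid sellers withdraw

# Each round the best cars exit, dragging the average (and the next bid)
# down further: Akerlof's reduction in both average quality and market size.
print(initial_size, "->", len(pool))
```

With these parameters the market unravels almost completely; if buyers valued quality at more than 2x, the first bid would exceed every reservation price and no seller would withdraw. How steeply the information asymmetry discounts the bid determines how far the market shrinks.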

This is exactly the problem that has been growing in the finance industry over the years. When Warren says that lengthy credit card contracts with footnotes and legalese are bad for consumers, she really means that they increase the gap between what the sellers (banks) and the buyers (consumers) know about the product. As shown by Akerlof’s work, this asymmetry of information hurts both buyers and sellers in the end because people will be more hesitant to engage in exchange. In other words, if you want to sell me that collateralized debt obligation so badly, then why would I want to buy it? Something must be wrong with it, right? And when people don’t have the education or expertise to properly evaluate financial products, they will either make poor decisions or no decisions at all (i.e., abstain from exchange). A new regulatory regime of common-sense rules for banks and investment firms, as outlined by Warren, would decrease information asymmetry, thus strengthening the market and improving consumer welfare.

Now we just have to wait and hope that Warren gets a seat on the Senate Banking Committee so she can effect this change.

Maslow’s Hierarchy of Needs and Economic Growth

Note: This post was motivated by the comedian Rob Delaney’s Twitter comment that Obamacare will help satisfy the base of Maslow’s hierarchy of needs, which will lead to a stronger economy. This idea has an intuitive appeal, so I decided to flesh out the argument in a longer format than Twitter allows.

Maslow’s hierarchy of needs, first proposed by Abraham Maslow in his 1943 paper “A Theory of Human Motivation,” is often illustrated as a pyramid like the one above. The idea is that human beings will not be able to focus on satisfying their higher level needs, such as creativity and respect, until they have satisfied their most basic needs, like food and sleep. Once they have taken care of their physiological needs, they can move on to worrying about their safety needs; once those have been met, they move on to their love/belonging needs, and so on.

Economists have already shown that economic growth can help people reach higher levels on the pyramid (The Wealth of Nations Revisited: Income and Quality of Life, 1995). This makes sense because as national income grows, people can afford to buy more food, water, housing, and other basic necessities. Rich nations can afford to build sanitation systems and basic infrastructure that people depend on to pursue their life goals. Clearly, causation runs one way, but does it go the other way too? In other words, if the government were to design its domestic policy to meet all the physiological and safety needs of its citizens, would its labor force be more productive?

A cursory look at the world GDP per capita rankings, as determined by the World Bank, reveals much about what types of policies might lead to higher productivity (GDP per capita is a rough measure of productivity). In the top ten, there are five countries that can aptly be described as social democracies, i.e. countries whose governments provide generous, universally accessible public services, including education, health care, child care, and workers’ compensation. These five countries are Norway, Australia, Denmark, Sweden, and Canada. The other five are very small countries that depend on a single industry for outsized incomes. There are the petrostates, Kuwait and Qatar; the financial centers and international tax havens, Switzerland and Luxembourg; and the East Asian gambling mecca, Macao. If the United States, currently ranked 14th, wants to improve its workers’ productivity, it should look to the five large countries, rather than the anomalous small ones, for guidance.

Note that all I have shown so far is that there is a correlation between GDP per capita and the level of public services provided by the state. Some people might even argue that a few of the social democracies would be more appropriately placed in the “anomalies” category. There is an argument to be made that Australia’s wealth is derived from the combination of its abundant natural resources and China’s insatiable thirst for raw materials. But then again, maybe not. The Scandinavian countries have their fair share of natural resources as well, not least among them oil, so maybe that’s how they manage to fund a generous welfare state while maintaining a high GDP per capita.

To make the case for causation, we need to examine one of the basic principles of economics: risk aversion. Economists observe that in the presence of uncertainty, people are usually risk averse: given the choice between a risky investment with a high expected return and a safer investment with a relatively lower one, they will often choose the less risky option. This risk aversion is compounded by the relatively recent finding from behavioral economics that people exhibit loss aversion, meaning they care more about preventing losses than acquiring gains. For example, one study showed that if you want to motivate students to perform well on tests, you should give them the reward before the test and then threaten to take it away if they fail to achieve a certain score. The study found that this incentive is more powerful than telling the students they will receive the same reward after the test if they meet the threshold.
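The standard way to formalize risk aversion is with a concave utility function. Here is a minimal sketch (log utility and the specific payoffs are my assumptions, not taken from any study): the risky option has the higher expected payoff, yet a risk-averse agent still prefers the safe one.

```python
import math

def utility(wealth):
    # Concave (log) utility: each extra dollar is worth less than the last,
    # which is what makes the agent risk averse.
    return math.log(wealth)

wealth = 100.0

# Safe option: a guaranteed 5% return.
safe_utility = utility(wealth * 1.05)

# Risky option: 50% chance of +60%, 50% chance of -40%.
# Expected payoff is 110, beating the safe option's 105...
risky_utility = 0.5 * utility(wealth * 1.60) + 0.5 * utility(wealth * 0.60)

# ...but expected UTILITY is lower, so the agent declines the gamble.
print(safe_utility > risky_utility)  # True
```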

So people inherently don’t like taking risks and they don’t want to lose what they already have. This presents a clear danger to economic growth, as most economists believe entrepreneurship, also known as risk taking, is an important driver of innovation and productivity gains. To help illustrate this theory, let’s consider the hypothetical case of Joe the Plumber. Let’s assume Joe works 40 hours a week at a mid-sized plumbing company in Cleveland, Ohio. Joe currently makes enough money to feed his family of four, maintain their health insurance coverage, and save for his kids’ college education. Now suppose that Joe wants to start his own plumbing company. To do this he will have to take business classes at a community college, take out a loan from the bank, and decide which tools, office space, technology, and transportation to invest in, to name just a few of the decisions involved. Joe will also have to devote a lot of time to the new business, so he will need to spend more money on child care. If the business were to fail, it would be a serious financial hardship for Joe’s family: he would have to discontinue the family’s health insurance and stop adding to his children’s college funds. Not wanting to lose what he already has by taking unnecessary risks, Joe forgoes the opportunity to start his own business. The economy stagnates.

But if Joe were to start his business in, say, Norway, he wouldn’t have to worry about many of these downside risks. His whole family would be entitled to health care, education, child care, and other public services, no matter the fate of his new business. What does Joe do then? He takes the plunge and signs up for business classes, knowing that failure won’t irrevocably harm his family. The economy now has one more entrepreneur hoping to strike it rich. If he succeeds, the economy will grow larger than it otherwise would have.

Now many would argue that the possibility of grave personal financial hardship is an important motivator for a businessman to succeed. This may be true on some level, but let’s not exclude the other reasons people start businesses. Being the owner of your own business merits a certain level of respect in the community and gives you a sense of achievement that is difficult to find elsewhere. Furthermore, a successful business would undoubtedly boost its owner’s confidence and self-esteem. There are myriad reasons to work toward making a business successful, aside from avoiding personal financial catastrophe.

This simplified example is not meant to show conclusively that social democracy boosts economic growth in all cases, or that the total economic benefits of a large welfare state outweigh its total economic costs. I merely wanted to show that by securing the lowest levels of Maslow’s hierarchy of needs, a country may generate more, not less, economic growth than it would otherwise. Indeed, there seems to be much evidence to recommend this framework for domestic policy, and proponents of a stronger welfare state would stand to benefit from adopting it.

A Twist on the Misery Index

The Misery Index, which is simply the sum of the inflation rate and the unemployment rate, was created by economist Arthur Okun in the 1960s while he was an adviser to President Lyndon Johnson. The index is intended to measure the economic well-being of people in a country during a given period of time.

When I first came across the Misery Index, I was intrigued by the idea, but thought that it could use some tweaking to more accurately measure how poorly an economy was performing. After doing some research, I think I’ve come up with a superior method of measurement. As always, the hope is that if we provide our policymakers with better data about the state of the US economy, they will be able to make better decisions going forward. In that vein, here is what the Stapp Misery Index looks like:

Instead of just summing the inflation rate and the U3 unemployment rate, as Okun did, I summed the U6 unemployment rate and the inflation rate, and then subtracted the percent change in real GDP from the previous year. I plan on writing a full-length post on the differences between the various measurements of unemployment, but a quick description will suffice here. In essence, U6 is a broader measurement of unemployment than U3. U3 is the percentage of the labor force that is unemployed and has looked for work in the past four weeks. U6 equals U3 plus discouraged workers (those who want a job but aren’t currently looking because they believe none are available), marginally attached workers (those who would like a job and have looked within the past 12 months, but aren’t currently looking), and part-time workers (those working part-time because they can’t find a full-time job).
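The index as described is just arithmetic. A sketch (the sample inputs are placeholder figures chosen to roughly reproduce the post’s “about 14,” not official BLS or BEA numbers):

```python
def stapp_misery_index(u6_unemployment, inflation, real_gdp_growth):
    # All arguments in percentage points; growth subtracts from misery.
    return u6_unemployment + inflation - real_gdp_growth

# Placeholder inputs: U6 = 14.4%, inflation = 1.8%, real GDP growth = 2.2%.
print(round(stapp_misery_index(14.4, 1.8, 2.2), 1))  # 14.0
```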

I decided to use the broader measure of unemployment because recent research has shown that unemployment has a bigger effect on people’s happiness than the inflation rate does. Rafael Di Tella, Robert J. MacCulloch, and Andrew J. Oswald published a paper in 2001 called Preferences over Inflation and Unemployment: Evidence from Surveys of Happiness, in which they estimated that “people would trade off a 1-percentage-point increase in the unemployment rate for a 1.7-percentage-point increase in the inflation rate.”

Lastly, I decided to subtract the percent change in real GDP because it is the most commonly cited indicator of economic growth; positive real GDP growth should lower, not raise, the Misery Index. The main conclusion I draw from the index is that we are still very far from returning to our pre-recession trough of roughly 8.5. The current value of about 14 implies that economic policymakers should keep the pedal to the metal on both fiscal and monetary stimulus. There is still too much misery out there to pull back now.

P.S. I plan on revisiting this index at a later date. I think it could still be improved and I’m not done tweaking it yet. One additional change I am considering is to measure inflation as the deviation from 2%. Most economists agree that in the long run a 2% rate of inflation is optimal, so any deviation from that rate would add to economic misery. On a technical note, I would add the absolute value of the deviation to the index because deflation is at least as grave a concern as excessive inflation when it comes to economic well-being.
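The proposed tweak is a one-line change: replace the raw inflation rate with its absolute deviation from a 2% target, so deflation adds to misery just like excessive inflation. A sketch of that variant (the sample numbers are again placeholders):

```python
def stapp_misery_index_v2(u6, inflation, real_gdp_growth, target=2.0):
    # |inflation - target| penalizes deflation and high inflation alike.
    return u6 + abs(inflation - target) - real_gdp_growth

# 1% deflation and 5% inflation are both 3 points from the target,
# so they add equally to the index.
print(stapp_misery_index_v2(14.4, -1.0, 2.2) == stapp_misery_index_v2(14.4, 5.0, 2.2))  # True
```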

How to Cure Baumol’s Cost Disease: Turn Services into Goods

Baumol’s Cost Disease is a phenomenon formally observed by William J. Baumol and William G. Bowen (history snubs poor Mr. Bowen) in their 1965 paper On the Performing Arts: The Anatomy of Their Economic Problems. Baumol and Bowen point out that it takes the same number of musicians the same amount of time to perform a Mozart string quartet today as it did when Mozart was alive. This implies that the productivity of classical musicians, or how much output (music) they produce per hour of labor, has remained constant over time. Meanwhile, sectors that employ more technology, like manufacturing, have seen productivity rise significantly over the years. This is where Baumol and Bowen’s work gets really interesting.

In addition to their observation that productivity has not increased in some sectors of the economy, like the robust Mozart string quartet industry, they also noted that real wages, meaning wages adjusted for inflation, have still increased in these sectors over time. This second observation is in direct contradiction with one of the basic tenets of classical economics: wage equals the marginal productivity of labor. In other words, at the margin, employers pay their employees based on how much they produce. But if the productivity of musicians hasn’t gone up, then why have their wages? The answer, say Baumol and Bowen, is that wages rise in stagnant sectors of the economy because producers must compete for labor by paying a competitive wage rate. If the real wage of a classical musician were the same as it was in Mozart’s time, no one would choose to be a classical musician. The opportunity costs, in this case the wages of other available jobs, would simply be too high.

This phenomenon is known as a “cost disease” because it has dire implications for the economy at large. As real wages increase in sectors of the economy that do not experience productivity increases, a larger share of the economy will be devoted to producing these services. And since the government disproportionately supplies many of them – e.g. health care, education, law enforcement – Baumol’s cost disease also predicts the government’s share of the economy will rise as well.

The string quartet example can be loosely extrapolated to the whole service sector of the economy because the defining characteristics of services describe a string quartet as well. There are generally considered to be five such characteristics: intangibility (you can’t hold it in your hand or store it in a warehouse), perishability (it cannot be reused once delivered), inseparability (it is dependent on the person providing the service), simultaneity (it is delivered and consumed at the same time), and variability (every service is unique and cannot be replicated exactly). For the purposes of this essay, you can think of these five characteristics as barriers to increasing productivity.

Before we talk about overcoming these barriers, let’s contrast services with goods. Goods are tangible, relatively nonperishable, separable from the provider, and consumable after delivery. For these reasons, the provider of a good is less critical to how and when it is consumed. Furthermore, goods can be mass-produced in factories, which are amenable to technological advancements in production processes. Services tend to resist technological innovation because the person providing a service must be present while it is consumed and he is the main determinant of its quality. Much of what we call technological innovation involves removing the human from the production process, e.g. robots on an assembly line. Since most service work is non-routine, it is difficult to take out the person delivering the service.

If services are resistant to increases in productivity and goods are not, I argue that we can cure Baumol’s cost disease by transforming services into goods. I will use the education industry as a prima facie case for how to achieve this type of transformation. There is a revolution going on right now in higher ed around online education. Companies like Coursera, MRUniversity, and Udacity have created online platforms for delivering college-level classes. Their approach differs from other experiments in online education, like MIT OpenCourseWare and edX, in that these companies are not merely videotaping professors giving their normal classroom lectures. They are creating classes from the ground up that are optimized for the online experience. For example, many of the classes emulate the format used by Khan Academy, which combines notes, pictures, and diagrams with a disembodied voice narrating the lesson. They claim this allows students to focus on the material and follow along more comfortably. In any event, the key technology all of these online platforms exploit is the same: video.

Though video technology has been with us since 1951, education innovators are just now using it to effectively disrupt higher ed. It is the perfect technology for this task because it flips four of the five characteristics that make education a service. Video enables lectures to be “reused” after they have been delivered (perishability); it allows them to be delivered without the provider present (separability); it permits viewing to occur after a lecture is delivered (simultaneity); and it makes the service homogeneous (variability). Once a college class has finished the transformation from a service to a good, it can be mass-produced (at a marginal cost of zero) and sold to the public. According to the statistics on the Khan Academy website, Sal Khan, the company’s founder, has delivered more than 200,000,000 lessons to students across the globe. Talk about productivity.

P.S. I realize that online video does not directly address tangibility. Though this characteristic is irrelevant to increasing productivity in higher ed, if it is an important feature to you, I suggest you download the videos, burn them to a DVD, and clutch it tightly.

The Future of Banking Will Come from an Unlikely Place: Africa

New York City is widely considered to be the financial services capital of the world. It is home to many of the world’s largest banks and the world’s largest stock exchange, NYSE Euronext. Therefore it would be natural to assume that if you wanted to know what the future of banking looks like, you would look to the Big Apple for some insight. If you wanted to learn even more about the latest innovations, you might look to London, Hong Kong, or Tokyo. But what if I told you the future of banking is not in any of those mega financial centers, but in Africa? This conclusion may sound counterintuitive at first, but it rings true once you consider one of the newest ideas in financial services: branchless banking.

Branchless banking is a way of delivering financial services without traditional brick-and-mortar buildings. Methods of distribution include mobile phones, the internet, point-of-sale (POS) devices, ATMs, and retail outlets. Africa has become a leader in this new distribution channel not because it had some unique foresight, but out of necessity. African countries lack the infrastructure to support the banking systems of the developed world and, more importantly, their customer base lacks the capital necessary to make such a system profitable. The purchase of land, the construction of buildings, and the hiring of staff are all capital intensive. Once those investments have been made, banks need large operating margins to cover their costs.

Such margins are a pipe dream in Africa, which is where branchless banking comes in. Whether it’s making payments to other people or making a deposit or withdrawal, many of the most common financial services delivered inside a bank branch can be performed outside of it via mobile phones and ATMs. M-Pesa, the largest mobile-phone payment system in the world, had 15 million accounts as of early 2012. Smartphones can deliver even more services, like depositing checks, than basic phones without internet access. More involved financial services, such as loans and investment accounts, can be delivered at third-party outlets, like post offices and grocery stores. This movement away from bank branches and toward third-party outlets will be similar to the switch from Blockbuster to Redbox. That didn’t turn out too well for Blockbuster. These nontraditional methods of delivery reduce transaction costs, which makes small transactions profitable and encourages the unbanked, those who don’t use financial services, to join the financial system.

This type of banking may be perfect for low-income markets like those found in Africa, but why would we import the idea to the United States, where many people have a high net worth and are satisfied with the current way of doing business? The answer is simple: gigantic cost savings. Let’s look at the data for Bank of America. Currently, Bank of America has about 5,600 branches and 290,000 employees. Let’s conservatively estimate that eliminating all branches would reduce the number of employees to 100,000 (I argue this is conservative because most staff positions are not at a bank’s headquarters, but at its branches). All the money that was spent on building, maintaining, and staffing bank branches could be used to pay higher interest rates on deposits, charge lower interest rates on loans, or develop new digital delivery technologies.
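A back-of-envelope version of the claim, using the headcount figures above and an assumed fully loaded cost per employee (the $80,000 figure is purely hypothetical, and branch real-estate costs would come on top of it):

```python
current_employees = 290_000
branchless_employees = 100_000       # the conservative estimate above
assumed_cost_per_employee = 80_000   # hypothetical fully loaded annual cost, USD

staff_savings = (current_employees - branchless_employees) * assumed_cost_per_employee
print(f"${staff_savings / 1e9:.1f}B per year in staff costs alone")  # $15.2B per year
```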

Realistically, Bank of America, or any of the big four banks for that matter, will not be able to close all of its branches in the near future. It has building leases, employment contracts, and service expectations that would prevent this from occurring at a sufficiently rapid rate. That’s why I think the branchless banking revolution in the United States will come through startups like Simple. Instead of worrying about scaling down costs, startups can focus their energies on scaling up revenue. To avoid the concomitant costs and hassles of becoming a full-fledged bank, these startups often partner with established banks. For example, Simple partners with the Bancorp Bank to ensure its clients’ funds are FDIC-insured. This allows Simple to focus on developing the alternative delivery methods that branchless banking depends on.

This branchless banking revolution is bound to occur sooner or later because the cost savings are so enormous. We just have to wait for it to get here from Africa first.

Monopolies, Human Needs, and Hobson’s Choice: A New Approach to Antitrust Regulation

“There is a widespread conviction in the minds of the American people that the great corporations known as trusts are in certain of their features and tendencies hurtful to the general welfare. This springs from no spirit of envy or uncharitableness, nor lack of pride in the great industrial achievements that have placed this country at the head of the nations struggling for commercial supremacy… It is based upon sincere conviction that combination and concentration should be, not prohibited, but supervised and within reasonable limits controlled; and in my judgment this conviction is right.” – President Theodore Roosevelt, State of the Union 1901

In 1898, President William McKinley appointed the Industrial Commission to investigate industrial concentration and railroad pricing and make policy recommendations to the President and Congress. After McKinley’s assassination in 1901, Theodore Roosevelt seized upon the Commission’s recommendations to impose stricter enforcement of the Sherman Antitrust Act of 1890, which banned anticompetitive practices by businesses. John D. Rockefeller pioneered the use of trusts, arrangements in which one person holds title to property for the benefit of another, to further consolidate his control of Standard Oil, then the largest corporation in the world. Due to Rockefeller’s successful campaign to dominate the US oil industry, the term “trust” has become more or less synonymous with “monopoly.”

The excerpt in the introduction is from Roosevelt’s first State of the Union address in 1901, in which he expresses the national displeasure with the rise of trusts. Standard economics tells us that reductions in competition between firms lead to less choice for consumers and, generally, less utility. But when does this decrease in welfare merit government intervention? Even while advocating an increase in the regulation of monopolies, Roosevelt was deferential in his rhetoric toward private business (“nor lack of pride in the great industrial achievements”). This progressive viewpoint, which at once lauds business for increasing human welfare and criticizes it for its anticompetitive tendencies, demands a balanced approach to antitrust regulation.

Since the Federal Trade Commission (FTC) was formed in 1914 under President Woodrow Wilson, it has been the foremost government agency responsible for preventing and eliminating anticompetitive business practices, such as price fixing (collusion among businesses to raise prices), limit pricing (threatening to lower prices significantly if new firms enter a market), and predatory pricing (selling goods at a loss to force competitors out of a market). Aside from preventing anticompetitive practices, the FTC is tasked with determining the point at which there is too little competition in a given market. This task is a difficult one because defining a market is highly subjective. For example, what market is Sirius XM Radio Inc. in? If you say it is in the satellite radio market, then Sirius is clearly a monopoly, because no other company in the US offers satellite radio service. But if you expand the definition of the market to include AM/FM radio, then there is competition aplenty (it’s hard to compete with free). No need to stop there, though. The market could be further expanded to include all music delivery services, like iTunes and CDs. But isn’t satellite radio just another type of entertainment? If you say that Sirius is a competitor in the Entertainment & Media market, then we are talking about a $1.1 trillion industry. As you can see, changing the market from satellite radio to Entertainment & Media is equivalent to teleporting a fish from a small pond to the Pacific Ocean.

The subjective nature of defining markets is one of the contributing factors to why antitrust cases take millions of dollars and many years to prosecute. In lieu of never-ending arguments between high-priced lawyers about what exactly a given market is, I propose a new, more limited rationale for pursuing antitrust cases. The FTC should dedicate its focus to the utilities industry. Companies in that sector of the economy satisfy the most basic of human needs by providing services such as water, sanitation, and electricity. In all other markets, a monopoly in the strictest sense, where consumers face only one available option, cannot exist. There is always Hobson’s choice, the refusal to buy a good or service. You can think of this as the “take it or leave it” option. But when it comes to fundamental human needs, you cannot “leave it.” You must consume it at any price (though the quantity demanded may decrease to subsistence levels).

Now, these types of services tend toward monopoly because of the enormous capital outlays required to build the infrastructure to deliver them. Water and sewage companies must lay pipelines to every house they service, and power companies must build electrical grids that cover entire cities. Once this infrastructure is in place, it would be highly inefficient, and likely unprofitable, for another company to build its own. It is important to note that food is normally included in any list of human needs, but I think it’s self-evident that the food industry has adequate competition; it lacks this tendency toward monopoly because food travels over shared infrastructure, international waters and public roads, rather than over dedicated networks.

Since many utilities are already publicly owned or highly regulated, this argument is an implicit endorsement of fewer antitrust lawsuits. Though monopolies generally harm consumers in the short-run by limiting consumer choice, the long run competitive environment of a market is too uncertain to warrant trust busting, excluding the markets that address human needs, of course. A brief review of the most notable antitrust lawsuit of the last twenty years, United States v. Microsoft, bears this out. The FTC launched an inquiry in 1991 and the case was finally settled upon appeal in 2001. In the settlement Microsoft agreed to allow non-Microsoft software, like the internet browser Netscape Navigator, to run on Windows, but it was not forced to break up into separate entities. After 10 years and millions of dollars, there was no trust busting to be had. Looking back on this case, after the rise of Apple, smartphones, and tablet computers, there is serious doubt that Microsoft would have been able to maintain its chokehold on the personal computing market absent the DOJ lawsuit. No one could have predicted these market trends in 1991, but that fact is a reason not to break up companies, rather than one for it.

If as a society we want more competition and fewer monopolies, we need to increase our scrutiny of the legal monopolies (also known as de jure monopolies) currently operating in the United States. Most legal monopolies spring from patents, or exclusive rights granted to inventors to produce and sell their inventions for a limited period of time. Richard Posner ably lays out the argument against granting too many patents for long periods of time in his Atlantic Monthly piece. Lastly, the newly minted Tabarrok Curve describes the relationship between patent strength and the level of innovation in an economy. The curve is based more on intuition than concrete theory, but it serves as a good jumping-off point for discussing the optimal level of patent protection for society. Like Tabarrok, I think we are clearly to the right of the curve’s peak, and we should hope that the next time Congress changes the patent laws, it will favor public welfare over corporate welfare. With tepid growth and a stubbornly high unemployment rate, that day cannot come soon enough.

We Shouldn’t Let Facebook Control the Drivers’ Licenses of the Internet

Now that the internet has become such an integral part of our lives, it is only natural for the government to reevaluate its relationship with this special technology. Most of the time when public discussion broaches this topic, it is only to demand that government bureaucrats leave the internet as it is. Recently, there was an outcry over SOPA and PIPA, two bills that were being considered by Congress to combat online piracy. Critics of the bills said they went too far by forbidding search engines from linking to websites suspected to have pirated material and by requiring Internet Service Providers to block access to entire domains even if only one webpage within that domain contained pirated material. This crude approach was denounced by many popular websites and the bills have been postponed indefinitely.

Another common flashpoint for debate is net neutrality. Essentially, net neutrality refers to the current government regulation that Internet Service Providers (ISPs) may not block or restrict your access to any website or network. This regulation is meant to prevent ISPs from favoring (possibly for a fee) certain content or services over others. Ironically, in this case most internet users favor maintaining this government regulation, rather than reducing it.

But there is one aspect of the internet that would be improved dramatically by new government intervention: Internet ID. Internet ID can be thought of as analogous to your driver’s license. It refers to any system that certifies who you are to the people you are interacting with online. An effective system could enhance trust and facilitate exchange in the growing online economy. Furthermore, scammers and other criminals would have more trouble exploiting people on the internet. In an environment where everyone knows your name, reputations become important and behavior should improve accordingly. To be clear, I am not advocating for the end of anonymity everywhere on the internet. That certainly has its place as a safeguard for free speech, but the time has come to give internet users the choice to externally certify who they are when they’re online.

Most of us already have a form of an Internet ID: a Facebook profile. Not only does your profile signal to other Facebook users who you are, but many other websites are beginning to use Facebook accounts to verify identities as well. This should concern many people because Facebook is a private company with private interests (namely, maximizing profits). Even though Mark Zuckerberg claims that “we don’t build services to make money; we make money to build better services,” color me skeptical. Mark’s intentions may be pure, but as a society we cannot leave such an important service up to the vagaries of a single man. Imagine if one company had a monopoly on drivers’ licenses. Do you really think they would be as cheap as they are today? My intuition tells me that the profit-maximizing price is somewhere higher on the y-axis than it is now. But if Internet ID is such a desirable service to provide, then why aren’t competitors able to compete with Facebook?

Network effects. Network effects occur when the value of a service to a consumer depends on the number of other people using the service. As the number of people increases, the value of the service increases (e.g. telephones). Once a network gets going, there is a virtuous feedback loop that makes growth accelerate exponentially. After a network becomes established, people have very little incentive to switch to a new network and therefore no new firms enter the market. To date, Facebook and Twitter have harnessed the power of network effects most successfully, but there is a key difference between the two sites. While Facebook uses real ID, meaning your profile name must be your real name, Twitter allows users to register anonymously. This difference is the reason Facebook is the leader in Internet ID, and Twitter isn’t.
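One common, if stylized, way to formalize network effects is Metcalfe’s law, which values a network proportionally to the square of its user count. The law and its constant are simplifying assumptions rather than settled theory, but a few lines of Python show why an established network is so hard to dislodge:

```python
def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Stylized network value under Metcalfe's law: V = k * n**2."""
    return k * users ** 2

incumbent_users = 1_000_000
entrant_users = 10_000  # an entrant with 1% of the incumbent's users

# Quadratic value means the entrant's network is worth only 0.01%
# as much as the incumbent's, not 1%. That gap is the entry barrier.
ratio = metcalfe_value(entrant_users) / metcalfe_value(incumbent_users)
print(f"Entrant/incumbent value ratio: {ratio:.4%}")
```

The quadratic relationship is the essence of the virtuous feedback loop described above: each new user makes the network more valuable to every existing user, so growth compounds.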

Though government intervention in the marketplace is rarely optimal or efficient, the presence of network effects makes it necessary for the government to step in. While regulators could invoke the Sherman Antitrust Act and move to bust Facebook’s monopoly, I am merely suggesting that the government create a program that allows people to verify who they are online. This idea is already in the works, but progress has been disappointing so far. In April of 2011, the Commerce Department announced the development of a new cyber-identity system called The National Strategy for Trusted Identities in Cyberspace. That was over a year and a half ago. In that time, a Silicon Valley startup could have launched and failed a dozen times. The US government needs to prioritize this program and make it, or something like it, the cornerstone of its internet strategy (after all, people love the rest of the internet as it is). To accomplish this, it needs to decrease the timeframes on its pilot projects and iterate quickly the way private firms do. Time is of the essence; Mark Zuckerberg is growing attached to his laminating machine.

Nonpartisan Blanket (Top-Two) Primaries: A Primer

Note: This post was motivated by Arizona Proposition 121, a ballot initiative that will change how Arizonans vote in primary elections. Though this blog’s main focus is economics, the branch of political science that studies elections, psephology, is a field that uses similar game theory and statistical methods to conduct research on electoral systems.

Before I dive into the analysis portion of this post, it is probably a good idea to explain what exactly a “nonpartisan blanket primary” is. Also known as a top-two primary or a qualifying primary, a nonpartisan blanket primary is a way of selecting the top two candidates for the general election without using the standard Republican and Democratic closed primaries.

Let’s break down this terminology just to be as clear as possible. First, “nonpartisan” just means that the primary is not controlled by any political party and, importantly, the candidates standing for election have not necessarily been endorsed by any political party. Candidates are usually allowed to state a party preference, using around 16 characters, but that does not mean the specified party supports the candidate or that the candidate actually holds the views of that party.

This may sound like splitting hairs, but the difference between being endorsed by political parties and merely stating a party preference is the main reason the United States Supreme Court ruled nonpartisan blanket primaries to be constitutional in Washington State Grange v. Washington State Republican Party in 2008. Before this ruling, nonpartisan blanket primaries were considered unconstitutional because when candidates listed their party affiliation (n.b. this is distinct from stating a party preference), the political parties felt they were being forced to endorse candidates whom they might not actually support.

Next, “blanket” means that the primary consists of all the candidates running for a given elected office, regardless of political party. And finally, “primary” refers to the fact that this stage of the election process is used to winnow down the field before the general election. Generally, if no single candidate in the primary receives a majority of the votes, then the top-two vote getters will advance to a general election. It is also important to note that this type of primary is an “open” system, meaning that all registered voters are eligible to vote. This contrasts with closed primaries where only registered voters of a particular political party can vote. While not all Republican and Democratic primaries are closed, many of them are, which means independent voters are often excluded from a critical stage of the voting process.

Now that you understand what a nonpartisan blanket primary is (hopefully), I want to show you that, when structured properly, it is superior to our current primary system. First, what are the goals of a nonpartisan blanket primary? Proponents of this system contend that by including independent voters in the primary process, each party’s base will have less influence on who gets elected. They hope this more inclusive approach to elections will have a moderating effect on politicians and incentivize bipartisanship. According to DW-Nominate, a scaling method that uses congressional roll-call votes to measure polarization, the current congress is the most polarized it has been in decades. The video below shows how polarization has changed throughout the years.

In August of this year, the approval rating of Congress hit an all-time low of 10%, so it’s fair to assume that this level of polarization is not healthy for the institution in the long-run. With few examples to study, political scientists are still researching whether nonpartisan blanket primaries are effective in decreasing polarization. Since we lack hard data to back up the supporters of top-two primaries, let’s address the system’s critics. I dismiss out of hand the main criticism from incumbent politicians that the proposed change is an attack on political parties. That’s exactly what it’s supposed to be! Politicians view any change as a negative because they have been very successful playing by the rules as they are currently written.

There are, however, two genuine criticisms of nonpartisan blanket primaries. The first is that incumbents will be further entrenched because the “anti-incumbent” vote will be split among numerous challengers, meaning the incumbent will get a majority of the vote in the primary and will not have to compete in a runoff election (this is how Louisiana conducts its primaries). Fortunately, this one is easy to address: don’t do that. Even if a candidate gets a majority vote share, the top-two finishers in the primary should automatically advance to the general election, which is how Prop 121 is designed. Consider the case where an incumbent receives 51% of the vote in the primary and three challengers receive 17%, 16%, and 16% each. By not automatically ending the election because the incumbent got a majority share of the vote, the 17% challenger has 3 months to gain name recognition and create an anti-incumbent coalition. Come the general election, it is plausible that the outcome will be flipped on the incumbent congressman.

The second criticism is that nonpartisan blanket primaries incentivize political parties to only have two affiliated candidates contest a primary. Consider the case where two Republican-affiliated candidates each receive 20% of the vote and four Democratic-affiliated candidates each receive 15% of the vote. In this case, the two Republicans would advance to the general election and this liberal-leaning district would be robbed of a left-of-center choice. While there is an actual risk of this situation occurring, I think it would be a rare occurrence in an open system. When anyone can run in the primary and state a party preference, it will be difficult for party bosses to control the composition of the slate of candidates.
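A few lines of Python make the vote-splitting mechanics of that scenario concrete; the vote shares are the hypothetical ones from the example above, with candidate labels of my own invention:

```python
# Hypothetical vote shares from the scenario above: two
# Republican-affiliated candidates split the smaller conservative
# vote, four Democratic-affiliated candidates split the larger
# liberal vote.
votes = {"R1": 20, "R2": 20, "D1": 15, "D2": 15, "D3": 15, "D4": 15}

# Top-two rule: the two highest vote-getters advance, period.
top_two = sorted(votes, key=votes.get, reverse=True)[:2]
print(top_two)  # both Republicans advance despite a 60% Democratic vote
```

The arithmetic is why party coordination matters under this system: the Democratic side wins 60% of the total vote yet places no candidate on the general election ballot.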

Lastly, there is a minor criticism that top-two primaries encourage strategic voting, i.e. members of political parties voting for weak candidates of the opposing party to increase their preferred candidate’s chances in the general election. Again, I think this criticism assumes far too much cooperation and coordination among thousands of people. The voting behavior of your fellow citizens is too unpredictable to incentivize that kind of voting strategy.

I hope this “primer” helped you understand how a nonpartisan blanket primary works, what its supporters hope it achieves, and why criticisms of the proposed changes are largely unfounded.

Why We Need to Prioritize Education Reform: Condorcet’s Jury Theorem

Education is said to have positive externalities. That means there are significant benefits to people who are external to the exchange between teacher and student. Even though teachers are the only ones who are paid to educate students and students gain human capital by attending school, conventional wisdom holds that society as a whole benefits from having a more educated citizenry. A quick look at the available data shows that people with more education are less likely to be unemployed and can expect to receive a higher income for their work. Higher incomes mean higher tax revenue for the government and more disposable income for the people who earn them. Now, though these data merely show a correlation between education and employment benefits, there are many reasons to believe there is causation as well. Even if you don’t think high school and college actually increase the knowledge of students (there are skeptics), it is a fact that many recruiters focus their time and resources on college campuses and that lots of jobs require a certain degree to even apply (this is known as credentialism).

But these are just the most common arguments for why investing in education is worthwhile. If those haven’t fully succeeded in convincing taxpayers that their money is being well spent, what else can be said to persuade them? One of the basic assumptions of economics is that people are self-interested and will only do what is best for themselves. While this is clearly a gross simplification, a few hours of interaction with most people will tell you this is at least on target. So if you want to increase public investment in education, you must make a more convincing argument for its positive externalities, i.e. the value for everyone apart from the students and teachers. This is where Condorcet’s jury theorem comes in. In short, the theorem is concerned with determining the probability that a group comes to a correct decision under majority voting. If the probability (p) of an individual in the group voting for the correct decision is greater than 1/2, then adding people to the group will make it more likely the group will come to the correct decision. Conversely, if p < 1/2, then adding more people to the group will only decrease the likelihood of the group reaching the correct decision (in this case the optimal number of voters would be 1). Now, political scientists say that Condorcet’s jury theorem provides a theoretical basis for democracy, but they often overlook the importance of the assumption that p > 1/2. If it is not, the theorem dooms democracy rather than providing a basis for it.
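The theorem is easy to check numerically. Here is a minimal sketch under its standard setup: an odd number of voters, each voting correctly with probability p, independently of the others.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n voters (n odd) votes
    correctly, when each votes correctly with probability p,
    independently of the others."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# When p > 1/2, adding voters drives group accuracy toward 1;
# when p < 1/2, adding voters drives it toward 0.
for p in (0.55, 0.45):
    probs = [majority_correct(n, p) for n in (1, 11, 101, 1001)]
    print(p, [round(q, 3) for q in probs])
```

Even a modest individual edge like p = 0.55 makes a large electorate nearly certain to choose correctly, while the mirror-image p = 0.45 makes it nearly certain to err, which is exactly the tipping-point behavior discussed below.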

The key here is that there is a tipping point. If p is less than 1/2, with a population of more than 300 million people, we are fated to choose suboptimal outcomes. But once p passes the 1/2 threshold, the reverse begins to hold true: we are extremely likely to make the correct decision, i.e. choose good public policy. I believe that if we improve our education system, we can increase p dramatically. When people have taken classes in Economics, Finance, English, Math, Science, etc., they are better prepared to challenge harmful policies and their proponents. More educated citizens will also be better equipped to analyze ballot propositions and the positions of candidates for public office. Education reformers disagree about how to best improve our education system, but if we begin to realize how important it is to the quality of our democracy, our policymakers will devote more time to answering these questions. Until then, we should pursue an “all-of-the-above” approach, preferably through randomized field trials. Just to name a few options, we could increase government funding (see chart below), improve school choice, and incorporate new technology into classrooms. Once we determine which of these approaches is the most cost-effective, we should devote more resources in that direction.

Unfortunately we don’t seem to be doing very well on the “increasing funding” part. In fact, we are doing the opposite:

So the next time someone says we shouldn’t prioritize education, refer them to Condorcet’s jury theorem and ask them how we can afford not to.