Central Bank Methods for Managing Currency Valuation

In June 2015, as Chinese stocks crashed, the Chinese central bank, the People’s Bank of China (PBOC), wrote to the United States Federal Reserve to ask for advice on mitigating a stock market plunge. (Reuters) The advice concerned how Greenspan dealt with Black Monday in 1987, including how to inject cash into the market and provide reassuring messages to investors. China, however, faced three challenges at once: it had to maintain a currency peg, support equities, and target interest rates on the open market. Two months later the PBOC chose to drastically devalue its currency.

The United States and China differ in the policy options available to their central banks for two main reasons. First, the United States is restricted in how much it can devalue its own currency without causing global turmoil. Second, the United States Federal Reserve is limited in its trading ability by the Federal Reserve Act. The Fed’s actions, however, are similar to the PBOC’s but require different avenues to remain legal.

The Fed and PBOC are also alike in that both require their currency to be stable. The United States benefits from the Dollar being the primary reserve currency, a position which requires a stable currency, at least relative to other major currencies. Likewise, China, which wishes the Renminbi to become a major reserve currency, cannot manipulate its currency openly. Both are therefore constrained in this respect. However, in a globalized world of free-floating exchange rates, many policy options remain available.

The President of the New York branch of the Federal Reserve, Bill Dudley, has noted that despite the vast capital provided to banks during the recession, including their large excess reserves, and the increasingly large balance sheet at the Fed, little inflation has been seen. In other words, despite a huge injection of cash into the market, inflation remains invisible. In the short term, currency valuation can be managed in two ways: pegs to other currencies, which must be maintained by balance sheet purchases and sales, and manipulation of the futures markets, especially currency and commodity futures. The PBOC prefers the former method and the Fed the latter. (NYFED)

Currency pegs have the downside of being highly visible and expensive to maintain over the long run. China, for instance, could not maintain its peg to the Dollar; nor could Switzerland maintain its peg to the Euro. The Chinese originally started in 1994 with a floating peg to the Dollar. This peg was maintained until 2008, when the float became a managed float still tied to the Dollar. Finally, in 2015 the Chinese received reserve currency status from the IMF and removed any explicit pegs. Now, after a huge currency devaluation in August of 2015, trading data seems to show a peg to gold. Regardless, pegs appear to be the PBOC’s preferred tool for maintaining stability.

In the United States, the Federal Reserve has a dual mandate to minimize unemployment and keep inflation low. This mandate means that the Federal Reserve cannot base monetary policy on pegs. Even though a peg to gold was historically used by the Fed, it was abandoned in the 1970s precisely because of its inflexibility. Since then, the Federal Reserve has instead reacted to economic indicators and focused on keeping inflation pegged at 2%. This floating currency management makes transparency and communication between the Fed and the market very important.

Both pegs to other currencies and pegs to inflation require constant active trading to maintain. Thus, the PBOC and Federal Reserve each have active trading bodies that maintain the central bank’s balance sheet. Though the balance sheets of the PBOC and Fed are not entirely known, their compositions vary extensively. This difference is due partly to the banks’ different mandates, but also to the sophistication and institutional knowledge at each bank. The Fed has been in the stability business much longer and has developed extensive internal trading methods and partnerships. We will focus on the different methods used by the Fed and PBOC to manipulate currency supply and currency valuation through the market.

The PBOC has traditionally managed its currency value by holding the largest foreign currency balance sheet of any central bank. (Bloomberg) This vast stockpile of foreign currency could be sold or bought to establish other currencies’ value relative to the Chinese currency. In addition, buying and selling had the dual purpose of affecting both the other currency and the money supply in China. However, these trading operations were rather transparent and often met with resistance from hedge funds, which used the trading signals to bet against the Chinese currency. To counter these speculators, whom China blames for its most recent currency crisis, the Chinese government has at times halted currency trading by foreign banks and proposed taxing currency trades. (People’s Daily)

However, over the past decade the Chinese have also been building a more sophisticated trading arm. For instance, since 2009 the PBOC has inked 31 currency swap agreements that provide currency liquidity to other central banks through reverse repo sales. (Zerohedge) The reverse repo instruments are offered at an overnight rate much like the federal funds rate. By varying this rate offered to other central banks, the PBOC has a window into a currency market previously only offered by the Federal Reserve. (Zerohedge)

In contrast, the Federal Reserve has been developing currency liquidity agreements, swap agreements, and currency settlement agreements with countries and central banks ever since the Bretton Woods currency regime broke down. For this reason, the Chinese knew where to go for advice when their currency became strained in July of 2015: the Federal Reserve. (Zerohedge) (Reuters) That said, the institutional knowledge at the PBOC is only beginning to catch up to that of the Federal Reserve. For instance, the Federal Reserve has avoided using direct currency interventions, which must be reported to the Treasury. (Bloomberg) Instead, the Federal Reserve prefers currency swaps, reverse repos, and short positions on commodities to effect its monetary policy beyond the nation’s borders. (CFR)

Thus, the central banks of China and the United States, though very different only a decade ago, are increasingly playing the same game with nearly as sophisticated tools. Adopting the Chinese system would give our Federal Reserve more power to intervene in equity markets during a recession. However, if one takes a Schumpeterian view of the economy, supporting weak firms through direct equity purchases may be counter-productive. Therefore, given the sophisticated tools available to the Federal Reserve, which barely fall within the limitations of the law, more change would likely occur if and when the Chinese adopt the trading methods of the Federal Reserve.

References:
“Currency Intervention.” New York Federal Reserve. https://www.newyorkfed.org/aboutthefed/fedpoint/fed44.html
Lang, Jason. “China central bank to Fed: A little help, please?” Reuters. 21 March 2016.
People’s Daily. “A declaration of war to the Chinese currency? ‘Ha ha.’” 26 January 2016.

Driving Forces of the College Bubble

Since the housing bubble popped, the most discussed bubble has been the college bubble. Economists do not agree on whether there is a bubble, nor on what is driving it. [1] Tuition rates and the number of graduating students have quadrupled in the past 30 years, far outpacing inflation and job growth. [2] Yet the opportunity cost of not going to college has never seemed higher. [3] After lost wages and the cost of tuition are considered, the difference in earning potential has been calculated to be as high as half a million dollars. This differential is based on the pay gap between college graduates earning $80,000 and high school graduates earning $43,000. [3] For the most recent graduates, this number does not tell the whole story. In fact, for them the choice is not nearly as simple as the NY Times article “Is College Worth It?” suggests. This paper will address three major flaws in the much-touted wage gap calculation and what it says about the underlying supply and demand for degrees.
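The arithmetic behind such lifetime-earnings figures can be sketched as a net-present-value calculation. The sketch below is illustrative only: the tuition, discount rate, and career length are assumed parameters, not values taken from the cited studies.

```python
def npv(cashflows, rate):
    """Net present value of a list of (year, amount) cashflows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# Assumed illustrative parameters (only the two wage figures come from the text):
COLLEGE_WAGE = 80_000    # college-graduate salary
HS_WAGE = 43_000         # high-school-graduate salary
TUITION = 30_000         # assumed annual cost of attendance
YEARS_IN_COLLEGE = 4
CAREER_YEARS = 40        # assumed working life after graduation
DISCOUNT = 0.05          # assumed real discount rate

# Naive gap: the wage premium times career length, ignoring costs and discounting.
naive_gap = (COLLEGE_WAGE - HS_WAGE) * CAREER_YEARS

# Costs while in college: tuition plus forgone high-school wages.
costs = [(t, -(TUITION + HS_WAGE)) for t in range(YEARS_IN_COLLEGE)]
# The wage premium earned each year after graduation.
premium = [(t, COLLEGE_WAGE - HS_WAGE)
           for t in range(YEARS_IN_COLLEGE, YEARS_IN_COLLEGE + CAREER_YEARS)]

net_value = npv(costs + premium, DISCOUNT)
print(f"naive lifetime gap: ${naive_gap:,.0f}")
print(f"NPV after costs and discounting: ${net_value:,.0f}")
```

Discounting and subtracting four years of tuition and forgone wages shrinks the naive gap dramatically, which is why published estimates land far below the raw lifetime difference in pay.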

First, all bubbles confer huge benefits on early entrants, while later entrants face huge risks from entering the over-valued market. Nevertheless, as a bubble peaks, two stories are prevalent: one declaring that a bubble exists and another warning of lost rewards for those who don’t buy in. This logic is mirrored in the pay gap. College degree holders from the 1960s to 1980s were an elite group of around 10% of the population. [2] They came to hold the top positions at US companies as the stories of mailroom-to-C-suite success declined. Today, with 33% of the population holding a college degree, a degree alone no longer means a management position and an elevated salary. These positions now call for a master’s degree. While the high salaries of the early entrants widen the pay gap, new entrants are greeted with a very different market. [4] How different is this market?

This market is characterized by 41.6 million college graduates and only 28.6 million jobs that require a college degree. [2] Stated another way, only 27% of jobs currently require an associate’s degree or higher, but 47% of the United States work force has these qualifications. [5] The constant headlines about degree-holding baristas are driven by the fact that 48% of college graduates are working in jobs not requiring a college degree and 37% are working in jobs that require only a GED. [2] These graduates struggling to find a college-level job are primarily newer entrants, who are finding that college alone is not a ticket to success in today’s market. [4] In this market, degrees are oversupplied and overvalued.

Second, the growth in degrees awarded each year is quickly outstripping the growth in demand. Over the ten years from 2001 to 2011, the number of college degrees awarded increased 39% for bachelor’s degrees and 71% for associate’s degrees. [6] Over the same period, the number of jobs requiring a bachelor’s degree or higher did not increase, and the number requiring an associate’s degree or higher increased by only 5%. [7][8] Though more people are awarded degrees than ever before, the demand for college degree holders has not increased substantially. This is reflected in a 60% total increase over the past two decades in the number of degree holders who are underemployed. [9] As supply exceeds demand, wages should fall as well. Indeed, real wages for young graduates fell 5.4% between 2000 and 2011. [10] These metrics indicate that the worth of a college degree has been falling for the past decade even as record numbers of students graduate college – a clear sign of a bubble. Nevertheless, the wage disparity between college graduates and high school graduates continued to increase over the same period. [11]

This brings us to our third point: the increasing wage gap is primarily a reflection of underemployed college graduates displacing high school-level employees. Never before has the employment outlook for a high school graduate looked so bleak. The number of young (ages 25–32) high school graduates in poverty has tripled since 1979, from 7% to 22%. [11] Wages for the same group have declined by 11% in the past decade. [10] Likewise, unemployment in this age group nearly doubled between 2007 and 2013, from 6.3% to 10.6%. [11] In essence, the wage gap compares not career success so much as basic employability. Needless to say, comparing an employed college graduate at minimum wage with an unemployed high school graduate produces quite a stark wage gap. It seems that over the past decade demand for underemployed college graduates has replaced demand for high school graduates.

This makes sense. An employer looking at two resumes for a minimum-wage job will choose the one with a degree – all other things being equal. Likewise, an employer looking at resumes for a college-level job will choose the candidate with a master’s degree over the one with a bachelor’s degree. Indeed, holders of master’s degrees and higher are the only segment of the population that has seen increased wages and more job openings; jobs requiring these qualifications doubled in the past decade. [7][8] Essentially, the master’s degree has become the new bachelor’s. What we now have is an employer’s market where over-qualification is the norm.

Given these market dynamics, speculation abounds over when and how the college bubble will pop. It is unlikely that it ever will. Economically rational students should make decisions at the margin. Therefore, so long as the growing price of college does not outrun the benefits promised by the growing wage gap, students will continue to enroll in ever greater numbers. One thing, however, should be clear. College enrollment is no longer a choice between flipping burgers and a successful career; rather, it may be a choice between flipping burgers and no job at all! Even more to the point, the wage gap is not an indicator of success.

From a policy standpoint, the current administration continues to insist that a college degree is the ticket to success. In fact, efforts are being made to increase the percentage of degree holders in the population from 30% to 60% over the next 15 years. [2] The reasons vary from making us more competitive as a nation by bringing us in line with other OECD countries to returning the United States to its former intellectual and industrial dominance. [1] If the goal of increased college enrollment is the benefit of society, then perhaps we don’t have a bubble. However, if the goal is simply to get any job and escape unemployment, then we are hugely over-invested in education.

References:

[1] Belkin, Douglas. “How to Get College Tuition Under Control” Wall Street Journal. 8 Oct. 2013. <http://online.wsj.com/news/articles/SB10001424127887324549004579068992834736138>

[2] Vedder et al. “Why are Recent College Graduates Underemployed?” Center for College Affordability and Productivity. (Jan. 2013)

[3] Leonhardt, David. “Is College Worth It? Clearly, New Data Say” NY Times. 27 May 2014.

[4] Abel et al. “Are Recent College Graduates Finding Good Jobs?” NY Federal Reserve. Current Issues in Economics and Finance. Vol. 20, No. 1: 2014.

[5] Carnevale et al. “Too Many College Grads? Or Too Few?” PBS News Hour. 21 February 2014.

[6] National Center for Education Statistics. “Undergraduate Degree Fields” The Condition of Education – Postsecondary Education. (April 2014)

[7] BLS “BLS Releases 2000-2010 Employment Projections” 3 December 2001

[8] BLS “Occupational employment projections to 2020” Monthly Labor Review (January 2012)

[9] Vedder et al. “From Wall Street to Wal-Mart” Center for College Affordability and Productivity. 16 December 2010

[10] Shierholz et al. “Labor Market for young graduates remains grim”. Economic Policy Institute. 3 April 2013.

[11] Bloomberg. “College Grads Taking Low-Wage Jobs Displace Less Educated” 12 March 2014.

When Goals Limit Solutions

One of the simplest concepts in economics is the rational person trying to achieve a single goal – a consumer trying to buy an ice cream, for example. Most people have goals, and most goals are not as simple as buying ice cream. Still, the nature of the goal can have important consequences for how you attain it. For instance, if you want ice cream but have no money, then robbing an ice cream truck is one of your only alternatives. Having just two dollars expands the number of choices and routes to your goal immensely.

This analogy has a corollary in public choice. Activists often proclaim the need to end practices still prevalent in society. In a recent article on domestic violence in the Guardian, the author concluded: “We need to end domestic violence entirely.” Similar positions have been voiced by law enforcement advocates calling for the end of crime, environmentalists calling for the end of pollution, and anti-war advocates calling for world peace. Each position is an absolute one: a negative aspect of society must end.

Two of these positions have been tested. Strong support for harsh criminal penalties in the 1980s and 1990s brought a shift towards mandatory minimums, three-strikes rules, and death penalties. Yet after 20 years, the data and policy studies cannot say what this harsh-penalty absolutism has gotten us besides lots of prisoners (see David and Goliath by Malcolm Gladwell). Ultimately, Gladwell arrives at the economic conclusion that eliminating crime, or even reducing it too much, can have too high a cost. Essentially, eliminating crime from society suffers from a problem of diminishing marginal benefit as more criminals are locked up.

Likewise, after numerous studies and suggestions on how governments can reduce pollution and greenhouse gases, little progress has been made. The best solutions are carbon taxes or a carbon market that would allow the trading of “carbon credits.” Neither solution satisfies the extremes of the two opposing voices: industry and environmentalists. As a result, even the structure by which future progress might be made is left languishing in sub-committees. The problem with absolutism is that, even if the goal is technically achievable, all-or-nothing demands restrict the routes through which progress can be made.

World peace was the choice cause of the 1970s. Now it is a pejorative for all similarly idealistic and unlikely wishes. No one blinks an eye when Obama suggests re-entering Iraq or Putin sends troops into Ukraine. This is the folly of absolutism: idealism turns to realism, then to satire, then to disregard. A movement must make its progress while it is still alive. The fault, however, lies not simply in idealism meeting realism but in limiting the paths to success.

By declaring an absolute goal, a movement alienates many possible allies. Calling for the end of pollution renders pollution-reducing science useless, opposes industry and its employees, and makes jet-travelling activists easy targets for scorn. In reality, the scientists working for industry will be the first allies needed to reduce pollution once legislation is enacted; the industry employees would benefit just as much from the absence of pollution literally in their backyards and workplaces; the industry itself could benefit if efficient solutions saved money as well as reduced pollution; and finally, activists wouldn’t look like hypocritical idealists.

The goal in these cases, then, limits the solutions as well as eliminating possible allies. The goal creates the structure of the solution set, and in the absolutist case, a much smaller solution set. Few people, especially moderates, are likely to find these solutions acceptable; the results benefit only a minority. In most cases where the opponents are equally matched (industry vs. environmentalists), nothing happens. On the other hand, where the sides are unmatched (criminals vs. an enraged society), extreme positions actually get implemented and hurt society as a whole (as happened in California). Better solutions abound but are not explored, because moderates are not the ones calling for change. For the benefit of society, we must end absolutism [wink] and start discussing other routes to the valid goals espoused by these movements.

The Economic Basis for Limited Patentability of Life

In 1999, a farmer named Vernon Hugh Bowman went down to the local grain elevator and purchased soybeans headed for processing. Bowman already had plenty of soybeans back on his farm. In fact, many of the soybeans in that grain elevator may have been his. However, Bowman couldn’t replant the seeds from his harvest, or even a fraction of them. His seeds contained patented genes developed by Monsanto, an agriculture supply company. Furthermore, as a condition of receiving the seeds, he had signed a contract not to plant their offspring the next season.

And thus Bowman found himself buying back seeds from the grain elevator. These seeds would be contract- and patent-free, he reasoned, due to the first sale doctrine of patent law. Now he would be able to replant seeds every season from the season before, as farmers had done since the dawn of time. He even told Monsanto what he was doing, perhaps to spur on the lawsuit that followed. In the end, Bowman v. Monsanto made its way to the Supreme Court, where in 2013 a unanimous ruling held that the first sale doctrine didn’t apply to reproduced seed. This was hardly surprising given the precedent and reasoning of previous cases such as Asgrow Seed Co. v. Winterboer (1995), which showed that after two planting seasons a patent on plant genes or a plant variety would be worthless if the produced seed were not also the property of the patent owner.

Nevertheless, this bold, albeit illegal, assertion of farmers’ rights by Vernon Bowman was only a single battle in what has been a century-long struggle between seed developers and farmers. It began in the early 1900s, when plant variety developers wanted to control the products of their labor once the plants were sold. This prompted proposals that plants should be patentable just as inventions were. These were rejected because plants could not be described with the accuracy that patent law requires (i.e., the written description requirement of 35 USC 112(a)). Furthermore, plants, especially plant varieties derived from natural breeding and selection, seemed to be products of nature, and products of nature were already excluded from patent law by court precedent (SAF, 2002).

Finally, in 1930 Congress decided that, while it might not be able to decide how all plant patents should be handled, one type of plant in particular lent itself to patenting: asexually reproduced plants. After all, a piece of a plant cut off and replanted is essentially the same plant, and ownership would not be hard to trace. As a result, the Plant Patent Act of 1930 (PPA) was passed, which granted patentability to asexually reproduced plant varieties (with the exception of tuber-reproduced plants, like potatoes) and relaxed the written description requirement for those patent applications as well.

Plant patents, however, were by no means utility patents. Instead they encompassed a separate and narrower spectrum of rights. Since patented plants at that time could not be described in terms of their genome, the appearance and traits of the plants, specifically the beneficial ones, were patented. Therefore, any alteration of those traits released the farmer or subsequent breeder from infringement. Likewise, sexually reproducing the plant, that is, planting or selling harvested seed, was not infringement, since at the time it was thought that varieties could not reproduce true-to-type. Though many of these limitations on plant patents resulted from the plants’ inability to be fully described and from misconceptions over the purity of reproduced seed, the rights of farmers to replant seed went largely unaffected. Indeed, it was often cheaper to purchase asexually reproduced seedlings from commercial breeders than for the farmers to manage the duplication from clippings themselves.

Over the next 40 years, however, things changed. First, it became abundantly clear that sexually reproduced plant varieties could reproduce true-to-type. Furthermore, a new type of plant development needed protection: hybrids. Commercial growers once again needed an extension of patent law. To be clear, the PPA explicitly covered hybrids themselves if they could be asexually reproduced. However, the PPA did not cover those hybrids’ parents (the inbred lines), which were the essential intellectual property behind the hybrid. Therefore, after much discussion and hearings with seed growers, developers and farmers, Congress passed the Plant Variety Protection Act (PVPA), which gave authority to the USDA to issue plant variety protection certificates (PVPCs). The PVPA covered sexually reproduced plants except for fungi, bacteria, tuber-propagated or uncultivated plants, and first-generation hybrids (already covered by the PPA). In addition to these specific plant exceptions, the law offered exemptions allowing proprietary plants to be reproduced for research purposes and to be replanted or sold by the farmer (so-called brown bag sales). While the plant variety development industry fought the farmer exemption in court, arguing that it undermined their entire business model, the USDA nevertheless received a large number of PVPA applications, demonstrating the economic benefit of the new law.

Then in 1980, Chakrabarty, a developer of micro-organisms for General Electric, attempted to patent a new bacterium which would break down and feed on crude oil, helping clean up environmentally disastrous spills. The US Patent Office rejected the application, holding that a Pseudomonas bacterium with this capability was unpatentable under 35 USC 101. The cited statute limits utility patents to useful processes, machines, compositions of matter and manufactured articles. The USPTO then defended this rejection before the Supreme Court, where the case became Diamond v. Chakrabarty. At issue was whether Congress had anticipated that 35 USC 101 might encompass living things, or whether the two previous plant patent acts, the PPA and PVPA, were intended by Congress to be the limits and sole means of intellectual property protection for life.

The Supreme Court ruled in a close 5-4 decision that Congress had always intended broad interpretation of 35 USC 101 and that overlapping coverage between PVPA/PPA and utility patents did not imply exclusion of patents on living things from 35 USC 101. Furthermore, the Court quoted Congress in a legislative discussion of amendments to 35 USC 101 as saying patentable subject matter should “include anything under the sun that is made by man”. Since the bacterium was clearly a creation of man, the Court held that it was patentable.

The patent office’s argument was that the previous acts (PPA and PVPA) had been created to allow patenting of a limited scope of living things and would not have been necessary if plants were already covered by utility patents. The Court countered that the Plant Patent Act had been created to relax a written description requirement and was enacted with full knowledge that living things should be patentable, and that the PVPA was enacted later to extend this coverage to sexually reproduced plants. After concluding that the two plant patent acts were enacted simply to aid the patenting of life (by lowering written description requirements and the like), the Court then argued that the PVPA’s explicit exclusion of bacteria represented little more than an assumption that bacteria were covered under utility patents or that bacteria were not plants.

This dismissal of the importance of Congress’ exclusion of bacteria does not hold up, however. First, the argument that bacteria were excluded because they had been patented before rings false. Did Congress anticipate overlapping coverage or not? If the Court can argue that bacteria were excluded because they were already covered, doesn’t that speak to a Congressional intent to maintain separate intellectual property domains under utility patents and the PVPA? Additionally, plants had been patented before the passage of the 1930 Plant Patent Act, yet Congress still saw it necessary to explicitly provide for their patentability. Second, the argument that bacteria were excluded because they were recognized as not being plants is on even shakier ground. Why, then, were some plants also excluded from the PVPA (namely tuber-reproduced plants and hybrids)? Why didn’t Congress also exclude animals, which are also not plants, since that issue was certainly foreseeable in 1970? And why did the Court see no issue with utility patents overlapping previous patent acts, when Congress had explicitly set non-overlapping boundaries for the PPA and PVPA?

Furthermore, if the purpose of the PPA and PVPA was simply to aid the patenting of life, why then restrict certain life from being “aided”? Such limits seem arbitrary if Congress intended that “everything under the sun made by man” should be patentable. Indeed, these limits were far from arbitrary; they delineated one of the clearest and most finely tuned sections of property law. What other property law resulted from over 50 years of back-and-forth demands and compromise between rights holders?

True, the Supreme Court regarded this as a narrow issue of whether a modified bacterium was an article of manufacture (i.e., made by man) under 35 USC 101. Many of these property rights issues were not considered, or were considered only in passing. However, in simply trying to provide statutory construction for a single section of US code, the Supreme Court altered 50 years of explicit limits on plant patents – limits which preserved the rights to save seed, modify patented plants without infringement, and reproduce some patented plants for research.

It has become all too obvious in the 20th century that one property or human right usually ends, or finds its limit, when it collides with another basic human right. Though this is perhaps more obvious in cases of nuisance or trespass, past jurisprudence has increasingly looked to economics to resolve these conflicts of property rights. Congress, as a legislative body with popular representation and extensive hearings from industry representatives, is best positioned to delineate property rights. In some cases, however, the legislative intent is not clear or did not anticipate the issue being decided.

In the pages that follow I will attempt to reconstruct not only the Congressional intent and reasoning that gave the Plant Patent Act, the Plant Variety Protection Act and utility patents their respective limits, but also the economic benefits and logic behind these very specific limits and exceptions. The dissenting opinion in Diamond v. Chakrabarty argued, overall, that the explicit limits in the PPA and PVPA must mean something, and it concluded with some relevant economic repercussions. This paper is not so much a critique of the Supreme Court’s majority opinion as a call to Congress to develop a statutory framework that more explicitly defends the rights of farmers and researchers as already enshrined in the PPA and PVPA.

When it enacted the Plant Patent Act, Congress did more than simply amend the written description requirement and explain that plants were patentable (as the Supreme Court has suggested). First, and most importantly, the PPA limited patentability to asexually reproduced plants only. This was done mostly because sexually reproduced plants were thought not to reproduce true to type. The bill also limited infringement to provable asexual reproduction. This meant that the first step in any plant lawsuit was establishing the chain of asexual reproduction (and therefore ownership) from the patented parent plant to the infringing descendants. Both of these limitations anticipated the costs of enforcement.

If a patented plant, one patented purely on the basis of description, could change its characteristics via sexual reproduction, how could infringement be proven (or, more importantly, disproven)? Furthermore, if said patented plant also reproduced sexually and came up in a future crop, how could willful infringement be proven? Or, even more fundamentally, what if a tuber-reproducing potato were left below the ground accidentally during harvest and reproduced asexually into another potato plant the next season? How could that infringement be proven? Rather than let the courts decide these difficult questions, Congress excluded these situations altogether by limiting the PPA to provable asexual reproduction of non-tuber-reproduced plants. Essentially, these statutory limitations in the PPA function to minimize enforcement costs and frivolous or misguided lawsuits.

Additionally, in enforcing the PPA the courts held that any alteration of the patented plant removed the possibility of infringement. This was simply a by-product of the PPA's limitation to asexually reproduced plants, which inherently precludes alteration. However, the resulting case law allowed for low-cost development and research of new varieties based on already-patented plants. This was essential for the steady progress of plant development, which at the time required successive generations and iterative improvements (unlike current genetic engineering). This research "loophole" was later explicitly codified when Congress extended protection to sexually reproduced plants under the PVPA.

When Congress decided to extend plant protection to sexually reproduced plants through the Plant Variety Protection Act in 1970, it did so with even more limitations. First, Congress removed PVPA evaluation from the US Patent Office and placed it under the US Department of Agriculture. Second, certificates were issued, not patents. Aside from similar protection terms, these certificates were very different from utility patents.

The requirements for protection under the PVPA mirrored those inherent in asexually reproduced plants: the plant variety had to be new, distinct, uniform and stable. Furthermore, the PVPA provided exemptions allowing farmers to sell and save seed for replanting (the farmers' exemption) while also permitting modifications without infringement (the research exemption). This preserved a long-held farmers' right to sell and replant seed from their crops. Economically this was essentially the same as a first-sale doctrine, since a farmer would no longer need to buy seed after the first purchase and could also resell it thereafter (perhaps Bowman wasn't that far off in his assumption). Furthermore, the explicit limitations prevented inadvertent-infringement lawsuits by allowing replanting and transfer of seeds.

The PVPA also explicitly excluded fungi, bacteria and tuber-propagated plants. For each, inadvertent infringement, and therefore enforcement costs, would be prohibitive. Fungi, for instance, reproduce both sexually and asexually in a variety of ways, some of which resemble tuber propagation, and their spores are microscopic, making transfer impossible to witness. Likewise, bacteria present the same difficulties with the added problem that they can selectively transfer genes between one another. This would make even patented genes transferable without any reproduction whatsoever!

In the wake of Diamond v. Chakrabarty and the patent office's subsequent In re Hibberd ruling, patents on life have changed dramatically. Plants, bacteria, fungi and animals are now patentable without limitation. Neither economics nor human rights supports the resulting marketplace. For instance, organic farmers, who obviously aren't willfully reproducing GMO seed, were forced to sue Monsanto, a seed developer, to extract a promise that they would not be sued for inadvertent infringement. Farmers can no longer replant their seed. Indeed, even replanting unpatented seed that may have cross-pollinated with patented plants in nearby fields would open a farmer up to infringement. Though Monsanto has wisely chosen not to expose these flaws in the current system by suing some hapless farmer, the system is nevertheless flawed.

When even weeds are acquiring genes from patented plants, as has happened with the Roundup Ready gene, what are the "metes and bounds" of a plant patent? When the world is awash with patented bacteria that could be digesting oil in your bathtub, who isn't an infringer? All property requires boundary lines, and the stronger the property right the clearer that line should be. This is especially important when multiple rights conflict, as farmers' rights and seed developers' rights do here. When boundary lines are left to promises not to sue, selective and arbitrary enforcement, and the direction the wind blows, the market takes on a significant social cost: uncertainty. The plant patent paradigm in place between the PVPA and Diamond v. Chakrabarty drew these lines in effective and reasonable, though not the most profitable, places. We should consider the 50 years of struggle, compromise and limitation that gave us the PVPA before reconstructing patent law to include "everything under the sun".

Corporate Structure and Outcome

The past couple of decades have seen giant corporations disappear overnight: Enron, Lehman Brothers and Bear Stearns were gone in a matter of days. Those same decades also featured some of the worst corporate oversight on record. The two oversight bodies of a traditional firm are the shareholders and the Board of Directors. The Board is usually charged with representing the shareholders' interests to the management of the company (i.e. the CEO, CFO, COO, etc.). Increasingly throughout the 1980s and 1990s, the Board's chairman would also be the CEO; in other words, having the CEO in charge of his own supervision became the norm. The increased power of the CEO in America's corporations led to higher compensation, higher risks, disregard of shareholders and spectacular failures. The saga that follows has two parts: the story and the structure/behavior relationship.

The first important question is why shareholders would ever relinquish their oversight power to the CEO. The trend toward unsupervised management began during the huge growth in the technology sector known as the dot-com bubble. Normally when a startup goes public and sells shares of stock in an IPO, the new responsibility to shareholders forces tough choices between growth and profit maximization. The dot-com bubble was famous for IPOs that sold spectacularly with no revenue, or even plans for revenue. When investors finally realized that their investments were simply monetary alchemy, all free services with no profitable outlook, the market crashed. Throughout the dot-com bubble and even after, the tough choice between growth and profit was never forced. Shareholders had traded in their power for profit, buying into the idea that their management must respond more quickly and make bigger bets in an ever-changing world. Sadly, the CEOs of the Fortune 500 tried to bring the same management structure into their companies. The result was needless bets and shareholder marginalization.

Fundamentally, structure gives rise to behavior. CEOs who have more power and freedom essentially have the company riding on their shoulders. Management likes this because it justifies higher salaries and lets executives make names for themselves: a CFO who doubled earnings per share could move on to become CEO at another company. This structure emphasizes short-term risk-taking for short-term benefits. The average investor going long on a stock prefers stability and year-over-year revenue growth. Additionally, investors would prefer that the CEO and management not be a single point of failure. Therefore, CEOs who must answer to their Boards and shareholders have to focus on improving the fundamentals and working on the margins. Neither of these tasks is all that glamorous. As a result, shareholders are at odds with the "entrepreneurial" CEO structure.

The choice of corporate structure and behavior should follow from the outcome the shareholders desire. Small-cap stocks and IPOs are chosen for growth and are allowed to take risks. The startup culture that has grown since the early 1990s is predicated on startup management taking risks, making bets and staying flexible. Profit is almost never the primary goal; growth is paramount. For instance, Facebook (FB) has never returned a profit to its shareholders, yet it is valued at roughly 163 times earnings because it continues to grow. No one complains that Mark Zuckerberg of Facebook is both the CEO and Chairman of the Board. In this environment, the risks are not only necessary but preferred by investors.
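The growth-versus-stability contrast here comes down to the price-to-earnings (P/E) multiple: how many dollars investors pay per dollar of annual earnings. The figures in this sketch are hypothetical, chosen purely to illustrate the arithmetic:

```python
def pe_ratio(share_price: float, earnings_per_share: float) -> float:
    """Price-to-earnings multiple: dollars paid per dollar of annual earnings."""
    return share_price / earnings_per_share

# A growth stock: a high price on thin earnings yields a very high multiple,
# meaning the market is pricing in future growth, not current profit.
growth = pe_ratio(share_price=81.50, earnings_per_share=0.50)

# A mature blue chip: a similar price on solid earnings yields a modest multiple.
blue_chip = pe_ratio(share_price=90.00, earnings_per_share=6.00)

print(round(growth))     # 163
print(round(blue_chip))  # 15
```

A multiple in the hundreds only makes sense if investors expect earnings to grow into the price, which is exactly the bet growth-stock shareholders are making.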

On the other hand, blue-chip stocks (large market cap) are chosen by long-term investors for their stability. Top-level competition and rivalry for rankings on Fortune's CEO list doesn't benefit shareholders. Nobody loses more from short-term betting and risk-taking than the 401k investors who hold blue-chip indexes like the S&P 500. Since the 1950s, turnover (think failure rate) in the Fortune 500 list has grown faster and faster. This isn't a reflection of increased competition; rather, it is a reflection of the jungle that the C-suite of corporate America has become. Fortunately, the hedge fund and 401k managers have begun to fight back. Demands for new board members and new voting rules have increased from these new shareholder champions. They've realized that changes to top management are meaningless so long as the structure benefits risk-takers and screws the shareholder.

The lesson here is that structure defines behavior and behavior defines outcome. Shareholders who want stability and less risk must choose the appropriate structure and exercise their rights. The bonus lesson is that systems should be environment appropriate. Risk-taking should be allowed when necessary and restricted when unnecessary. As we shall see, these principles are also applicable to the government stalemates in America.

Labor System Shifts: By Industry

In 1776, Adam Smith applauded the free world's reinvention of the old labor system. Before the fall of feudalism and the rise of the middle class, goods were produced purely for survival or at the request of the king, duke or earl. As Smith points out, this method of production, where a single smith may produce everything from nails to knives to cups to plows, is highly inefficient. Specialization allows human beings to make the production of a good almost instinctive, and therefore more fluid and repeatable. The second triumph celebrated by Adam Smith is the division of labor. This system change allowed complicated tasks, like making sewing pins, to be broken down into simpler tasks that unskilled labor could more easily specialize in. Now that cheap unskilled labor is becoming harder to find even on the global market, the labor system is evolving yet again. The automation we see in the manufacturing sector is the final iteration (possibly; nothing is ever final) of this process.
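Smith's own pin-factory figures (Wealth of Nations, Book I, Chapter 1) show the scale of the gain; the arithmetic below simply restates them:

```python
# Smith reported that ten specialized workers, dividing pin-making into
# roughly eighteen distinct operations, produced upwards of 48,000 pins
# a day, while a lone untrained worker could scarcely make twenty.
workers = 10
daily_output_divided = 48_000                      # pins/day with divided labor
per_worker_divided = daily_output_divided / workers
per_worker_alone = 20                              # Smith's generous solo estimate

speedup = per_worker_divided / per_worker_alone
print(per_worker_divided)  # 4800.0 pins per worker per day
print(speedup)             # 240.0
```

By Smith's own numbers, division of labor multiplied each worker's daily output roughly 240-fold.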

This transition in the manufacturing sector was actually delayed by two factors. First, globalization and the end of the Cold War opened the floodgates of cheap labor around the world. By the 1990s, the industrialized nations had the technology to automate many of their manufacturing processes. But with labor overseas at under a dollar a day and complicated human-machine specialization systems already in place, it was easier to move to China than to invest in robots and develop new systems. Second, humans can become very efficient, often surpassing robots, when given a task to do day in and day out. For these reasons, the labor system in the manufacturing sector has been slow to change. Other industries have modernized much more quickly.

Farming was one of the industries that Adam Smith specifically identified as difficult to specialize or divide amongst laborers, owing to both its seasonal nature and the diversity of crops grown at the time. Yet automated farming with harvesters, threshers, balers and the like was developed not more than 50 years after Smith made this observation. Once the motor and engine were invented, farming had self-powered, all-in-one grain harvester-threshers. Farming skipped a step: while organized labor was developing assembly lines and other methods of aiding the specialized worker, farming was developing tools that would allow any marginally sober unskilled driver to harvest thousands of acres. The huge turning point for farming automation, as we all know, was the Great Depression. As a result, farms came to span hundreds of thousands of acres and grew crops that could be automated: corn, wheat, sorghum, soybeans and peanuts.

Farming then took the next step: multi-purposing. George Washington Carver turned peanuts into everything from cosmetics to dyes, paints and plastics, and even entirely new foods like peanut butter. Soybeans became plastics, milk, tofu, glues and foam cushions. Corn and corn syrup transformed the processed food industry entirely. Once an industry can no longer improve its production processes, it takes the best and most efficient of its range of products and multi-purposes them. Most other industries are only now beginning to look at multi-purposing. Only recently have car manufacturers started developing assembly lines that can produce vastly different cars with minor, overnight changes. In the near future, automated factory floors will be able to take in different inputs and produce different outputs with little more than a flipped switch.

The final frontier, so to speak, of automation and the labor system shift is the services sector. This is a shift the United States has a chance of leading. Service automation began with automated calling systems and ATMs, but those services were very basic and generally horrific replacements for actual humans. It wasn't until the advent of the internet, really only Web 2.0, that automated services became a viable option. Today we have e-file taxes, e-banking, e-bills, e-mail, e-insurance, e-harmony, e-publishing and e-learning. Even Wall Street trading is being handled by automated programs. The automation of services has only just begun. Only recently could an online retailer like Amazon.com be considered competition for the epitome of conventional automation: Walmart. So where is the opportunity to be found?

For America as a nation, the opportunity lies in creating the programs and automated services that will serve the world for the next 100 years. Automating services requires both expertise in algorithm design and a sizable amount of creativity. With one of the best post-secondary education systems in the world, no country is better poised to take on this upper-level development than the United States. Sure, 11th graders in Vietnam have the coding skills to work for Google. But Google and Apple didn't become the tech giants they are today by hiring the cheapest coders; they hired the best and most creative people they could find. These people simplified interfaces and streamlined processes at a level that requires not only a wide knowledge of the user and the environment but also the technical ability to reduce that knowledge to an algorithm. Interfaces became lickable and grandma-friendly. Cellphones became our alarm clocks, gaming systems, instant messengers, personal audio systems, calendars, notepads, cameras and newspapers. These were the easy and obvious services to automate, the low-hanging fruit. Though the multi-purposing has already begun, the vast number of services that remain unautomated leaves the future wide open for the taking.