Author Archives: manderson

The Ideal COVID-19 Team

Every crisis movie has one: the elite team of experts that tackles the crisis, often with widely differing views (e.g. The Sum of All Fears). If our firm were asked to assemble a team to make recommendations for the duration of the crisis, this is what that team would look like:

  • Two U.S.-trained virologists/epidemiologists with actual on-the-ground experience battling a virus (e.g. Ebola, SARS, or HIV veterans).
  • Two foreign-trained virologists/epidemiologists, also with actual on-the-ground experience battling a virus. The foreign training, preferably from the BRICS countries, would help them suggest lower-cost solutions.
  • Two applied-probability experts who have experience with large data sets, validation, and attention to detail regarding naming conventions and the meaning of variables. The best I have seen are Census Bureau employees for their attention to detail; for instance, they double-program every model.
  • Two market analysis experts who have at least a decade of experience pricing markets with high volatility and low information (e.g. oil, gold, FX, VIX).
  • Two macroeconomists, preferably one from an Austrian economics background and one from a Keynesian background. Maybe a supply-side economist as well for good measure.
  • Three emergency room medical doctors, all without ties to or payments from pharmaceutical companies. Preferably, at least one with significant experience in Doctors Without Borders or who has seen emergency disease care under constraints up close.
  • Three pharmacists, all foreign-trained and with foreign experience. It is impossible to trust US pharmacists not to have ties to US drug companies, or to find ones aware of low-cost foreign solutions. The best would be Canadian pharmacists, who also have some knowledge of US drugs.
  • Three complexity scientists with experience and publications studying tail probabilities. Preferably, these experts would help interview and select the probability gurus. One may specialize in urban issues (e.g. at the Santa Fe Institute). One may simply be used to check and question the work of others before they present (a designated skeptic).
  • Three labor leaders, preferably in essential worker industries such as food preparation, trucking and public transportation.
  • Two supply chain experts, preferably sourced from business.
  • Two organizational psychologists, preferably one who has studied mass media effects.
  • One administrative law and constitutional lawyer.

Why such a large team (27 in all)? By contrast, Germany assembled a team of just 10 experts spanning medicine and economics.

First, large teams have low decisiveness and high deliberative capacity. The founding fathers recognized this in giving so much power to the largest body of Congress. In any crisis, the self-inflicted wounds may be the easiest to avoid, and these are avoided by deliberation. For instance, rather than rushing to build models to determine impact when many of the required fundamental variables were unknown, deliberation over first steps and mitigation would have been a far better use of time. Complexity scientists such as Nassim Taleb, along with the foreign scientists, have put little faith in the models, suggesting instead PPE, social distancing, and lockdowns long enough to gather data. The US experts seem to have a fetish for models.

Second, large teams are unlikely to make bold predictions that get printed when the underlying facts are unknown. For instance, SARS in 2003 was suspected of spreading via aerosol transmission, yet many government agencies, including the WHO, came out early in the COVID crisis and said that airborne transmission was not occurring. After an outcry from doctors, most walked back this assertion, though the CDC is still battling internally over the issue. These types of statements do not help create an informed public, and they harm the vulnerable, who may then take excessive risk.

Third, large teams are unlikely to cede to the demands of one group. If Governor Cuomo had had a large team, it is unlikely it would have simply adopted the hospitals' urging to send infected seniors back out of the hospitals. A larger team can weigh one group's emergency more rationally. Giving hard nos is easier when no one group dominates the team (medical doctors and epidemiologists dominate US advising teams).

Fourth, large teams have the needed expertise amongst themselves. As a result, they can use the knowledge gained from long-term, repeated deliberations to assess each other's biases. Furthermore, having the necessary expertise within the group reduces the number of calls to outside experts for reports, because the team is large enough to break down issues and summarize the data itself. This prevents the citation game of unaccountable experts, or experts not in the room to defend an assertion. Working groups can each contain medical and non-medical expertise. Thus, the team is able to trust, or at least expect balance in, the work produced by sub-committees. Likewise, they know that any work produced must be defended, which tempers claims and arguments.

What would be their decision making apparatus?

While large teams may not be as decisive, their recommendation procedures can be adjusted. For instance, early in the COVID crisis many in the public were simply searching for any possible mitigation techniques. Publicizing mitigation techniques used for other viruses, worst-case-scenario measures, or simple emergency preparation techniques would have informed people of actions that, even if only marginally useful, would have cut the virus's R0. Getting people informed and into mitigation mode early was the basis for Germany's success versus Spain's. Preferably in this mode, the deliberations of the team would be public, transcribed, and include citations. Thus, publishing all ideas signed by one scientist on the team could be a decision mode for quick information dissemination.
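As a back-of-the-envelope sketch of why even marginally useful measures matter (hypothetical numbers, not a fitted epidemiological model): if independent mitigations each cut transmission by some fraction, their effects compound multiplicatively on the effective reproduction number.

```python
# Back-of-the-envelope model: independent mitigation measures each
# reduce transmission by a fraction e_i, so the effective reproduction
# number is R_eff = R0 * product(1 - e_i). Illustrative numbers only.

def effective_r(r0, reductions):
    """Compound independent transmission reductions multiplicatively."""
    r_eff = r0
    for e in reductions:
        r_eff *= (1.0 - e)
    return r_eff

# Hypothetical R0 and modest per-measure reductions (masks, distancing,
# hand hygiene), each marginal on its own.
r0 = 2.5
measures = [0.15, 0.20, 0.10]

print(round(effective_r(r0, measures), 3))  # 2.5 * 0.85 * 0.80 * 0.90 = 1.53
```

Three individually modest measures together cut a hypothetical R0 of 2.5 down to about 1.53, which is the intuition behind informing the public early.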

Longer reports with advice to governors, mayors, etc. could be deliberated behind closed doors. Dissents could be signed and published as well.

Recommendations to the president, administrative agencies, and businesses, especially if they directed an administrative agency to act or sought to impair business, would be deliberated publicly by the team, transcribed, and well cited.

Risk assessments and areas of risk could be compiled by each group of experts or by interdisciplinary teams, with all compiled and agreed-upon risks from each group being published. Again, awareness of risks, however unlikely, allows mitigation. Slow-moving disasters are often avoided (e.g. the Y2K bug).

That sounds like a lot of transparency and information to wade through?

Yes, and information is key; credibility is key; openness is key. Public health crises have always had elements of fear and distrust. It is hard enough for the public to evaluate the opinions that would be produced without the added questions and concerns over quid pro quo and additional agendas. Notice that politicians, the military, drug companies, diplomats, and hospital managers are not on the committee.

A team like this, owing no allegiance to any prior team/agency/party, would certainly have performed better than the shouting matches between the CDC/NIH and White House economists that have “decided” most policy to date in the USA. For instance, the Swiss team advising their politicians was able to quickly ban hydroxychloroquine (HCQ) based on a paper published in May 2020, and then reinstate its use three weeks later when that paper was withdrawn for data errors. Perhaps the usefulness of HCQ is not yet known; perhaps it is only useful in early stages or with other drugs/supplements. The point is that other countries are trying these options, experimenting with crowd transmission, and collecting data. The US “team” is really just three separate federal organizations (the CDC, the NIH, and the economic advisors) with their own interests to defend and no reason to give ground. As a result, Fauci seems to accept only double-blind, pharmaceutical-funded studies (no other organizations can afford double-blind trials, nor are they realistic in an emergency), while the White House will accept anything as the cure. Thus, the failure of the current team stems in part from an unwillingness to even acknowledge the other side's experts or develop some baseline reasoning. It is a complete breakdown from where I am standing.

Central Bank Methods for Managing Currency Valuation

In June 2015, as Chinese stocks crashed, the Chinese central bank, the People’s Bank of China (PBOC), wrote to the United States Federal Reserve to ask for advice in mitigating a stock market plunge. (Reuters) The specific advice concerned how Greenspan dealt with Black Monday in 1987, including how to inject cash into the market and provide reassuring messages to it. China, however, was facing three challenges at once: it had to maintain a currency peg, support equities, and target interest rates on the open market. Two months later, the PBOC chose to drastically devalue its currency.

The United States and China differ in the policy options available to their central banks for two main reasons. First, the United States is restricted in how much it can devalue its own currency without causing global turmoil. Second, the United States Federal Reserve is limited in its trading ability by the Federal Reserve Act. Its actions, however, are similar to the PBOC’s but require different avenues to remain legal.

The Fed and the PBOC are also alike in that they require their currencies to be stable. The United States benefits from the dollar being the primary reserve currency, a position that requires a stable currency, at least relative to other major currencies. Likewise, China, which wishes the renminbi to become a major reserve currency, cannot manipulate its currency openly. Both are therefore constrained in this respect. However, in a globalized world of free-floating exchange rates, many policy options remain available.


Driving Forces of the College Bubble

Since the last bubble popped in the housing market, the most discussed bubble has been the college bubble. Not only do economists not agree on whether there is a bubble, they also don’t agree on what is driving it. [1] Tuition rates and the number of graduating students have quadrupled in the past 30 years, far outpacing inflation and job growth. [2] Yet the opportunity cost of not going to college has never seemed higher. [3] After lost wages and the cost of tuition are considered, the difference in earning potential has been calculated to be as high as half a million dollars. This differential is based on the pay gap between college graduates earning $80k and high school graduates earning $43k. [3] For the most recent graduates, this number does not tell the whole story. In fact, for them the choice is not nearly as simple as the NY Times article “Is College Worth It?” suggests. This paper will address three major flaws with the much-touted wage-gap calculation and what it says about the underlying supply and demand for degrees.
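A figure of this magnitude can be reproduced with a hedged, back-of-the-envelope present-value calculation. The two salaries come from the text above; the discount rate, tuition, and career length are illustrative assumptions, not the NY Times' actual methodology.

```python
# Sketch of the lifetime wage-gap calculation. The $80k/$43k salaries
# come from the article; tuition, career length, and the 4% discount
# rate are assumed for illustration.

def pv_annuity(cash, years, rate, start_year=1):
    """Present value of a constant annual cash flow."""
    return sum(cash / (1 + rate) ** t
               for t in range(start_year, start_year + years))

def net_degree_value(college_wage, hs_wage, tuition_per_year,
                     school_years=4, career_years=40, rate=0.04):
    # The earnings gap accrues only after school ends.
    gap = pv_annuity(college_wage - hs_wage, career_years, rate,
                     start_year=school_years + 1)
    # Costs while in school: tuition plus forgone high-school-grad wages.
    costs = pv_annuity(tuition_per_year + hs_wage, school_years, rate)
    return gap - costs

print(round(net_degree_value(80_000, 43_000, 10_000)))
```

Under these assumptions the net present value of the degree lands in the rough vicinity of half a million dollars, and the result is quite sensitive to the discount rate and assumed costs, which is part of why the headline number is contested.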

First, all bubbles confer huge benefits on early entrants, while later entrants face huge risks from entering the over-valued market. Nevertheless, as the bubble peaks, two stories are prevalent: one declaring that a bubble exists and another warning of lost rewards for those who don’t buy in. This is the logic mirrored in the pay gap. College degree holders from the 1960s to 1980s were an elite group of around 10% of the population. [2] They came to hold the top positions at US companies, even as stories of mailroom-to-C-suite success declined. Today, with 33% of the population holding a college degree, a degree alone no longer means a management position and an elevated salary. These positions now call for a master’s degree. While the high salaries of the early entrants widen the pay gap, new entrants are greeted with a much different market. [4] How different is this market?


When Goals Limit Solutions

One of the simplest concepts in economics is the rational person trying to achieve a single goal: a consumer trying to buy an ice cream, for example. Most people have goals, and most goals are not as simple as buying ice cream. Still, the nature of the goal can have important consequences for how you obtain it. For instance, if you need ice cream but have no money, then robbing an ice cream truck is one of your only alternatives. Having just two dollars expands the number of choices and routes to your goal immensely.

This analogy has a corollary in public choice. Activists often proclaim the need to end practices still prevalent in society. In a recent article on domestic violence in the Guardian, the author concluded: “We need to end domestic violence entirely”. Similar positions have been voiced by law enforcement advocates calling for the end of crime, environmentalists calling for the end of pollution, and anti-war advocates calling for world peace. Each position is an absolute one: that a negative aspect of society must end.

Two of these positions have been tested. Strong support for harsh criminal penalties in the 1980s and 1990s brought a shift toward mandatory minimums, three-strikes rules, and death penalties. Yet after 20 years, the data and policy studies are not sure what this harsh-penalty absolutism has gotten us besides lots of prisoners (see David and Goliath by Malcolm Gladwell). Ultimately, Gladwell arrives at the economic conclusion that eliminating crime, or even reducing it too much, can carry too high a cost. Essentially, eliminating crime from society suffers from a diminishing marginal benefit as more criminals are locked up.

Likewise, after numerous studies and suggestions on how governments can reduce pollution and greenhouse gases, little progress has been made. The best solutions are carbon taxes or a carbon market that would allow the trading of “carbon credits”. Neither solution satisfies the extremes of the two opposing voices: industry and environmentalists. As a result, even the structure by which future progress might be made is left languishing in sub-committees. The problem with absolutism is that, even if the goal is technically possible, the all-or-nothing demands restrict the routes through which progress can be made.

World peace was the choice cause of the 70s. Now it’s a pejorative for all similarly idealistic and unlikely wishes. No one blinks an eye when Obama suggests re-entering Iraq or Putin sends troops into Ukraine. This is the folly of absolutism: idealism is likely to turn to realism, then to satire, and then to disregard. For progress to be made by a movement, progress must be made while the movement is still alive. The fault, however, lies not simply in idealism meeting realism but in limiting the paths to success.

By declaring an absolute goal, activists alienate a lot of possible allies. Calling for the end of pollution renders pollution-reducing science useless, opposes industry and its employees, and makes jet-travelling activists easy targets for scorn. In reality, the scientists working for industry will be the first allies needed to reduce pollution once legislation is enacted; the industry employees would benefit just as much from the lack of pollution literally in their backyards and workplaces; the industry itself could benefit if efficient solutions saved money as well as reduced pollution; and finally, activists wouldn’t look like hypocritical idealists.

The desired goal, then, in these cases limits the solutions as well as eliminating possible allies. The goal creates the structure of the solution set, and in the absolutist case, a much smaller solution set. Few people, especially moderates, are likely to find these solutions acceptable; the results benefit only a minority. In most cases where the opponents are equally matched (industry vs. environmentalists), nothing happens. Where the sides are unmatched (criminals vs. an enraged society), extreme positions actually get implemented and hurt society as a whole (as happened in California). Better solutions abound but are not explored, because moderates are not the ones calling for change. For the benefit of society, we must end absolutism [wink] and start discussing other routes to the valid goals espoused by these movements.

Corporate Structure and Outcome

The past couple of decades have seen some giant corporations disappear overnight: Enron, Lehman Brothers, and Bear Stearns vanished in a matter of days. The same decades also saw some of the worst corporate oversight ever. The two oversight bodies of a traditional firm are the shareholders and the Board of Directors. The Board is usually charged with representing the shareholders’ interests to the management of the company (i.e. the CEO, CFO, COO, etc.). Increasingly throughout the 1980s and 1990s, the Board’s chairman would also be the CEO. In other words, having the CEO in charge of his own supervision became a norm. The increased power of the CEO in America’s corporations led to higher compensation, higher risks, disregard of shareholders, and spectacular failures. The saga that follows has two parts: the story and the structure/behavior relationship.

The first important question is why shareholders would ever relinquish their oversight power to the CEO. The trend toward unsupervised management began during the huge growth in the technology sector known as the dot-com bubble. Normally, when a startup goes public and sells shares of stock in an IPO, the new responsibility to shareholders forces tough choices between growth and profit maximization. The dot-com bubble was famous for IPOs that sold spectacularly with no revenue or even plans for revenue. When investors finally realized that their investments were simply monetary alchemy, all free services and no profitable outlook, the market crashed. Throughout the dot-com bubble and even after, the tough choice between growth and profit was never forced. Shareholders had traded in their power for profit, buying into the idea that their management must respond quicker and make bigger bets in an ever-changing world. Sadly, the CEOs of the Fortune 500 tried to bring the same management structure into their companies. The result was needless bets and shareholder marginalization.

Fundamentally, structure gives rise to behavior. CEOs who have more power and freedom essentially have the company riding on their shoulders. Management likes this because it can justify higher salaries and allows them to make a name for themselves: a CFO who doubled earnings per share could move on to become CEO at another company. This structure emphasizes short-term risk taking for short-term benefits. The average investor going long on a stock prefers stability and year-over-year revenue growth. Additionally, investors would prefer that the CEO and management not be a single point of failure. Therefore, CEOs who must answer to their Boards and shareholders must focus on improving the fundamentals and working on the margins. Neither of these tasks is all that glamorous. As a result, shareholders are at odds with the “entrepreneurial” CEO structure.

The choice of corporate structure and behavior should result from the outcome desired by the shareholders. Small-cap stocks and IPOs are chosen for growth and are allowed to take risks. The startup culture that has grown since the early 1990s is predicated on startup management taking risks, making bets, and staying flexible. Profit is almost never the primary goal; growth is paramount. For instance, Facebook (FB) has never paid its shareholders a dividend. Nevertheless, it is valued at 163 times earnings because it continues to grow. No one complains that Mark Zuckerberg of Facebook is both the CEO and Chairman of the Board. In this environment, the risks are not only necessary but also preferred by investors.
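A rough, hypothetical-number sketch of why growth can justify an extreme earnings multiple: if earnings compound for some years and the stock eventually trades at an ordinary "mature" multiple, the price-to-earnings ratio implied today is far higher than the mature one.

```python
# Back-of-the-envelope illustration (hypothetical inputs, not FB's
# actual financials): a growth stock's implied P/E today.

def implied_pe(mature_pe, growth, years, discount=0.08):
    """P/E today implied by future earnings growth.

    future earnings = 1 * (1 + growth) ** years
    future price    = mature_pe * future earnings
    today's price   = future price discounted back to the present
    today's P/E     = today's price / today's earnings (normalized to 1)
    """
    future_earnings = (1 + growth) ** years
    today_price = mature_pe * future_earnings / (1 + discount) ** years
    return today_price

# A firm growing earnings 40%/yr for 10 years, settling at a P/E of 20:
print(round(implied_pe(20, 0.40, 10)))  # roughly 268
```

So even a triple-digit multiple can be internally consistent for investors who expect sustained growth, which is the bet small-cap and IPO shareholders are knowingly making.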

On the other hand, blue-chip stocks (large market cap) are chosen by long-term investors for their stability. Top-level competition and rivalry for rankings on Fortune’s CEO list don’t benefit shareholders. Nobody loses out from the short-term betting and risk-taking more than the 401k investors who invest in blue-chip indexes like the S&P 500. Since the 1950s, turnover (think failure rate) in the Fortune 500 list has gotten faster and faster. This isn’t a reflection of higher competition; rather, it’s a reflection of the jungle that the C-suite of corporate America has become. Fortunately, the hedge fund and 401k managers have begun to fight back. Demands for new board members and new voting rules have increased from these new shareholder champions. They’ve realized that changes to top management are meaningless so long as the structure benefits risk takers and screws the shareholder.

The lesson here is that structure defines behavior and behavior defines outcome. Shareholders who want stability and less risk must choose the appropriate structure and exercise their rights. The bonus lesson is that systems should be environment-appropriate: risk-taking should be allowed when necessary and restricted when unnecessary. As we shall see, these principles are also applicable to the government stalemates in America.

Labor System Shifts: By Industry

In 1776, Adam Smith applauded the free world’s reinvention of the old labor system. Before the fall of feudalism and the rise of the middle class, goods were produced purely for survival or at the request of the king, duke, or earl. As Smith points out, this method of production, where a smithy may produce everything from nails to knives to cups to plows, is highly inefficient. Specialization allows human beings to make the production of a good almost instinctive and therefore more fluid and repeatable. The second triumph celebrated by Adam Smith is the division of labor. This system change allowed complicated tasks like making sewing pins to be broken down into simpler tasks which could be more easily specialized by unskilled labor. Now that cheap unskilled labor is becoming harder to find even on the global market, the labor system is evolving yet again. The automation we see in the manufacturing sector is the final iteration (possibly; nothing is ever final) of this process.

This transition in the manufacturing sector was actually delayed by two factors. First, globalization and the end of the Cold War opened the floodgates of cheap labor all around the world. By the 1990s, the industrialized nations had the technology to automate many of their manufacturing processes. But with overseas labor under a dollar a day and the complicated human-machine specialization systems already in place, it was easier to move to China than to invest in robots and develop new systems. Second, humans can become very efficient, often surpassing robots, when given a task to do day in and day out. For these reasons, the labor system in the manufacturing sector has been slow to change. Other industries have modernized much more quickly.

Farming was one of the industries which Adam Smith specifically identified as difficult to specialize or divide amongst laborers, due to both its seasonal nature and the diversity of crops grown at the time. Yet automated farming with harvesters, threshers, balers, etc. was developed not more than 50 years after Adam Smith made this observation. Once the motor and engine were invented, farming had self-powered, all-in-one grain harvester-threshers. Farming skipped a step: while organized labor was developing assembly lines and other methods of aiding the specialized worker, farming was developing tools that would allow any marginally sober unskilled driver to harvest thousands of acres. The huge turning point for farming automation, as we all know, was the Great Depression. As a result, farms came to span hundreds of thousands of acres and grew crops that could be automated: corn, wheat, sorghum, soybeans, and peanuts.

Farming then took the next step: multi-purposing. George Washington Carver turned peanuts into everything from cosmetics to dyes and paints to plastics, and even entirely new foods like peanut butter. Soybeans became plastics, milk, tofu, glues, and foam cushions. Corn and corn syrup changed the processed food industry entirely. Once an industry can’t improve its production processes any further, it takes the best and most efficient of its range of products and multi-purposes them. Most other systems are only now beginning to look at multi-purposing. Only recently have car manufacturers started developing assembly lines that can produce vastly different cars with minor, overnight changes. In the near future, automated factory floors will be able to take in different inputs and produce different outputs with little more than a flipped switch.

The final frontier, so to speak, of automation and labor system shifts is the services sector. This is a shift where the United States has the chance to lead. Service automation began with automated calling systems and ATMs, but those services were very basic and generally horrific replacements for actual humans. It wasn’t until the advent of the internet, and really only Web 2.0, that automated services became a viable option. Today we have e-file taxes, e-banking, e-bills, e-mail, e-insurance, e-harmony, e-publishing, and e-learning. Even Wall Street trading is being handled by automated programs. The automation of services has just begun. Only recently could an online retailer be considered competition for the epitome of conventional automation: Walmart. So where is the opportunity to be found?

For America as a nation, the opportunity lies in generating the programs and automated services that will serve the world for the next 100 years. Automating services requires both expertise in algorithm coding and a sizable amount of creativity. With one of the best post-secondary education systems in the world, no country is better poised than the United States to take on this upper-level development. Sure, 11th graders in Vietnam have the coding skills to work for Google. But Google and Apple didn’t become the tech giants they are today by hiring the cheapest coders. They hired the best and most creative people they could find. These people simplified interfaces and streamlined processes at a level that requires not only a wide knowledge of the user and the environment but also the technical ability to reduce that knowledge to an algorithm. Interfaces became lickable and grandma-friendly. Cellphones became our alarm clocks, gaming systems, instant messengers, personal audio systems, calendars, notepads, cameras, and newspapers. These were the easy and obvious services to automate, the low-hanging fruit. Though the multi-purposing has already begun, the vast number of services that remain unautomated leaves the future wide open for the taking.