Many of you reading this will know of Warren Buffett, but few will have heard of the legendary Howard Marks. He’s a billionaire investor who writes regular ‘memos’ about economics and markets. Many of the hedging themes I’ve adopted over the last 30 years, for my equity portfolio and for my property business, have come from learning from Howard how the financial markets analyse data and effectively hedge their investment portfolios for the inevitable downturn. So with this in mind, I thought I would share below some of Howard’s recent thinking, post coronavirus, which has influenced me over the years.
I studied Howard’s work extensively during my MBA year at Smurfit Graduate School. As a younger student, I was unfamiliar with hedging or shorting equities, and I certainly needed help from my Economics lecturer – a Professor of Economics – who first introduced me to Howard’s way of thinking. I was a confused young man trying to work out, on a practical basis, how to implement some of Howard’s thinking and how to apply that logic to my own property portfolio.
Unfortunately, although I had his theories in my head, much of Howard’s work didn’t resonate with me on a practical basis until after the crash in 2007. I learned some valuable lessons about hedging that year.
In hindsight, I now regret not putting more of Howard’s teaching into practice earlier in my career, as I took a substantial hit in 2007. This was mainly due to overgearing and over-confidence, to not having a credible hedging strategy in place, and to the fact that I paid no attention to Howard’s four bullet points below:
I draw your attention to these interesting bullet points. They summarise my feelings too, and I hope you will enjoy reading the insights below as much as I have:
• To confuse factual knowledge with superior insight
• To conflate expertise and insight with the ability to predict the future
• To treat experts in one field as if they’re knowledgeable about all others
• To credit rich and successful people with all of the above
UNCERTAINTY II – HOWARD MARKS
I’ve written a few times about the frequency with which I come across something additive just before finalizing a memo. This time I wasn’t so lucky: my wife Nancy brought an important article to my attention two weeks after the publication of Uncertainty. The article’s appearance, along with a second potential addition, prompts me to write this post-script to that memo. I have a few thoughts to add, all generally related to the topic of foreknowledge.
No One Knows What’s Going to Happen
The above heading was the title of an excellent article by Mark Lilla, a professor of humanities at Columbia University, which appeared in The New York Times this past Sunday. (You may remember my previous discussion of our tendency to think highly of people who agree with us. I readily admit that the reason I like this article so much may lie in the fact that it confirms a great deal of what I said in Uncertainty.)
Here are some excerpts from that article:
The best prophet, Thomas Hobbes once wrote, is the best guesser. That would seem to be the last word on our capacity to predict the future: We can’t. But it is a truth humans have never been able to accept. People facing immediate danger want to hear an authoritative voice they can draw assurance from; they want to be told what will occur, how they should prepare, and that all will be well. We are not well designed, it seems, to live in uncertainty. Rousseau exaggerated only slightly when he said that when things are truly important, we prefer to be wrong than to believe nothing at all.
Apart from the actual biology of the coronavirus – which we are only beginning to understand – nothing is predestined. How many people fall ill with it depends on how they behave, how we test them, how we treat them and how lucky we are in developing a vaccine.
The result of those decisions will then limit the choices about reopening that employers, mayors, university presidents and sports club owners are facing. Their decisions will then feed back into our own decisions, including whom we choose for president this November. And the results of that election will have the largest impact on what the next four years will hold.
The pandemic has brought home just how great a responsibility we bear toward the future, and also how inadequate our knowledge is for making wise decisions and anticipating consequences. Perhaps that is why our prophets and augurs can’t keep up with the demand for foresight.
At some level, people must be thinking that the more they learn about what is predetermined, the more control they will have. This is an illusion. Human beings want to feel that they are on a power walk into the future, when in fact we are always just tapping our canes on the pavement in the fog.
A dose of humility would do us good in the present moment. It might also help reconcile us to the radical uncertainty in which we are always living. Let us retire our prophets and augurs.
Lilla’s article pulls together in one place several themes from Uncertainty and other recent memos:
• the very human hunger for forecasts to help us navigate the future,
• the conditionality of the future on multiple future developments,
• our own ability to influence the future through the decisions we make,
• the unpredictability of each development,
• thus the futility of forecasting,
• the importance of accepting our ignorance of the future, and thus
• the general importance of intellectual humility.
Articles like this one and those cited in my last memo should drive home these points to everyone’s satisfaction. But rarely will people fully accept that we must make decisions regarding the future without knowing it.
The Future as Path-Dependent
Forecasters seem to act as if the future already exists, and all we have to do is be smart enough to discern it. But that ignores the fact that all of us – and many other influences – are constantly creating the future through our collective activity.
In his article, Lilla stated, “the post-Covid future doesn’t exist. It will exist only after we have made it.” I think this is a very important concept. We might predict the future today, and we might even correctly assess what today’s conditions and actions are likely to produce in the future. But that prediction will be shown to have been right only if no one and nothing causes the future to become different between now and the day it arrives. Thus I’ll repeat what I quoted from Lilla earlier:
How many people fall ill with [the coronavirus] depends on how they behave, how we test them, how we treat them and how lucky we are in developing a vaccine.
Not only how will the virus behave, morph, travel, react to warm weather and infect, but also how fast will we reopen the economy, how will people behave when we reopen it, and what will the virus do at that time? Thomas Sowell, a Hoover Institution economist and social theorist, provided a glimpse at how these things work in another field:
Economists are often asked to predict what the economy is going to do. But economic predictions require predicting what politicians are going to do – and nothing is more unpredictable.
The unpredictability of politicians is only one of the many variables complicating the future today. Not only can’t we predict people’s actions and the many other things that will determine the course of the virus and its impact on the economy, but we also certainly can’t predict when they’ll take those actions – and that will count just as much.
Which Expert to Follow?
In the memo Uncertainty, I quoted at considerable length from an article by Erik Angner. One of its most interesting points was as follows:
People who lack the cognitive skills required to perform a task typically also lack the metacognitive skills required to assess their performance. Incompetent people are at a double disadvantage, since they are not only incompetent but also likely unaware of it. (Behavioral Scientist, April 13)
By definition, people who lack the expertise in a given field required for superior judgments also lack the expertise required to assess their level of expertise. As I mentioned, they qualify as John Kenneth Galbraith’s forecasters “who don’t know they don’t know.”
While re-reading my memo, I realized I had left out an important further ramification. Not only do most people fail to possess superior expertise – as well as the ability to know it – but they also lack the ability to figure out who does have it. That’s the catch: you may have to be an expert in a field in order to be able to figure out who the true experts are. That’s why research in most fields is subjected to “peer review,” meaning a review by experts (not to be confused with “a jury of one’s peers,” meaning other lay citizens).
And yet, where does the buck stop on the biggest of questions, like those of today? The answer can’t be “with the experts.” An article in The Wall Street Journal set out the dilemma:
To govern, at least at the level of the presidency, is to make hard choices among competing options with incomplete information. Easier problems are resolved before they ever reach the Oval Office. Neither scientific data nor public sentiments can properly answer the questions that face elected officials. Both are important and must be integrated into the judgments that political leaders make. But neither can substitute for that crucial act of judgment.
The president’s job, and not only in times of crisis, frequently involves listening to experts disagree with one another and taking responsibility for choosing among them, plotting a course through opportunities and dangers. The capacity to do this well involves its own sort of practical wisdom, an expertise in judging expertise. (“Experts Aren’t Enough,” May 16-17)
Nowadays, like everyone else, I’m bombarded with conflicting views regarding the wisdom of rapidly reopening the U.S. economy. Yet I recognize that not only is my opinion on that topic of little value, but I also don’t have the expertise required to know for sure whose opinion does count. What I do know is that the last thing I should do is choose an expert because his or her opinions agree with mine, and allow confirmation bias to affect my decision.
Further, in considering expertise, we must be leery of some dangerous tendencies in our society:
• to confuse general intelligence with knowledge of the facts relative to a given field,
• to confuse factual knowledge with superior insight,
• to conflate expertise and insight with the ability to predict the future,
• to treat experts in one field as if they’re knowledgeable about all others, and
• to credit rich and successful people with all of the above.
Thus, as I’ve described in previous memos, when I travel abroad, I’m often asked what I think of my host countries’ economies and their potential. “Why ask me?” I respond. “You live here.” Just because I know something about investing and the U.S., why should I necessarily have meaningful insight into other fields and countries?
We see doctors or public health officials on TV who inveigh against quickly reopening the economy. They may well know much more than most about the medical and public health aspects of the coronavirus and how it should be dealt with, and their advice is likely to keep the most people alive. But on the other hand, since they’re not economists, we should assume they’re only answering from the standpoint of minimizing deaths. They may not take into consideration the importance of restarting the economy or how to balance the two considerations.
On the other hand, we see businesspeople and economists talking about the need to reopen in order to minimize the damage done to the economy by keeping it in a deep freeze. But what do they know about the cost in human lives? And certainly there is no algorithm or accepted process for deciding between the two. It’s a matter of judgment, not expertise.
I recently read an article about an often-cited libertarian lawyer and legal scholar (unnamed here because of my general practice of not criticizing individuals) who predicted in mid-March that no more than 500 people would die from Covid-19 in the U.S. (revised upward to 5,000 when he later found a statistical error in his analysis). While he admitted to having no medical expertise, he said he did know more than the doctors about evolutionary theory and its applicability to the virus. His opinion apparently carried great weight at the time in conservative quarters.
Reporters, not being experts themselves, have to consult experts in order to write their stories. But how do they choose and vet the experts they cite? And to what extent are their selections a function of the biases we all carry and the conclusions they want to justify? In my experience, the more I know about a subject, the less I’m impressed with related media coverage. And likewise, elected officials are rarely expert in the fields about which they have to make decisions. They, too, have no choice but to depend on experts. But how do they choose their experts? Would they ever consult an expert who belongs to the other party? I recently read a Wall Street Journal op-ed piece by a conservative senator suggesting categorically that conservatives and liberals differ in how they weigh reopening the economy against minimizing infections. Is such a sweeping (and probably unscientific) generalization more likely to be an appropriate observation or an example of intellectual bias stemming from ideological division?
So (a) true expertise is scarce and limited in scope, (b) expertise and predictive ability are two different things, and (c) we all should be careful about whom we listen to and how much weight we give to their pronouncements.
And one other thing: As Lilla wrote, “People facing immediate danger want to hear an authoritative voice.” Thus they tend to put inordinate faith in a popular “prophet.” And when he or she turns out to be a less-than-perfect forecaster, and thus only human, they go looking for the next one to anoint. They never say, “I guess forecasting doesn’t work.” I would say the same about people in general, including those looking for help making money without risk or effort.
A Living Example
Finally, in a related vein, I want to mention a May 19 article by Morgan Housel of Collaborative Fund, an insightful commentator whose philosophical and behavioral observations tend to resonate with me. In it, he told the story of two friends with whom he regularly took the risk of skiing out of bounds at the resort they frequented as teens. One day, his friends went out for a second run while he begged off for no particular reason, and a freak avalanche took their lives. Here’s his summation:
I don’t know if Brendan and Bryan’s death actually affected how I invest. But it opened my eyes to the idea that there are three distinct sides of risk:
• The odds you will get hit.
• The average consequences of getting hit.
• The tail-end consequences of getting hit.
The first two are easy to grasp. It’s the third that’s hardest to learn, and can often only be learned through experience.
We knew we were taking risks when we skied. We knew that going out of bounds was wrong, and that we might get caught. But at 17 years old we figured the consequences of risk meant our coaches might yell at us. Maybe we’d get our season pass revoked for the year.
Never, not once, did we think we’d pay the ultimate price. But once you go through something like that, you realize that the tail-end consequences – the low-probability, high-impact events – are all that matter. In investing, the average consequences of risk make up most of the daily news headlines. But the tail-end consequences of risk – like pandemics, and depressions – are what make the pages of history books. They’re all that matter. They’re all you should focus on. We spent the last decade debating whether economic risk meant the Federal Reserve set interest rates at 0.25% or 0.5%. Then 36 million people lost their jobs in two months because of a virus. It’s absurd.
Tail-end events are all that matter.
This introduces one of the great conundrums associated with investing. Since we know nothing about the future, we have no choice but to rely on extrapolation of past patterns. By “past patterns,” we mean what has normally happened in the past and with what severity. And yet, there’s no reason why (a) things can’t happen that differ from those that happened in the past and (b) future events can’t be worse than those of the past in terms of severity and thus consequences. While we look to the past for guidance as to the “worst case,” there’s no reason why future experience should be limited to that of the past. But without reliance on the past to inform us regarding the worst case, we can’t know much about how to invest our capital or live our lives.
Many years ago, my friend Ric Kayne pointed out that “95% of all financial history happens within two standard deviations of normal, and everything interesting happens outside of two standard deviations.” Arguably, bubbles and crashes fall outside of two standard deviations, but they are the events that create and eliminate the greatest fortunes. We can’t know much in advance about their nature or dimensions. Or about rare, exogenous events like pandemics.
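As an aside, Kayne’s observation can be made concrete with a quick simulation. The sketch below is my own illustration, not from the memo: it draws returns from a fat-tailed distribution (a Student’s t, a common stand-in for market returns; the parameters are assumptions chosen for illustration) and shows that while nearly all days fall within two standard deviations, the rare days outside them account for an outsized share of the total movement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate daily returns with fat tails (Student's t, df=3).
# The distribution and scale are illustrative assumptions,
# not calibrated to any real market.
returns = rng.standard_t(df=3, size=100_000) * 0.01

mu, sigma = returns.mean(), returns.std()
outside = np.abs(returns - mu) > 2 * sigma

# Share of days that stay within two standard deviations of normal.
share_inside = 1 - outside.mean()

# Share of the total absolute price movement contributed by the
# rare days outside two standard deviations.
move_share_outside = np.abs(returns[outside]).sum() / np.abs(returns).sum()

print(f"days within 2 std devs: {share_inside:.1%}")
print(f"share of total movement from outlier days: {move_share_outside:.1%}")
```

In a run like this, roughly 95% or more of the days sit inside the two-standard-deviation band, yet the handful of outlier days contribute several times their proportional share of the total movement, which is the point of Kayne’s remark: the interesting (and fortune-making or fortune-breaking) action lives in the tails.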
In 2001, I wrote a memo titled You Can’t Predict. You Can Prepare. At first glance, that seems like an oxymoron. How can we prepare for something if we can’t predict it? Turned around, if the greatest extremes and most influential exogenous events are unpredictable, how can we prepare for them? We can do so by recognizing that they inevitably will occur, and by making our portfolios more cautious when economic developments and investor behavior render markets more vulnerable to damage from untoward events.
That line of reasoning suggests a glimmer of good news: we may not be able to predict the future, but that doesn’t mean we’re powerless to deal with it.
May 28, 2020