Friday, March 13, 2009

Review of Taleb's The Black Swan


I wrote this a while ago, mainly based on posts I had done over at Mahalanobis, and posted it on my website. I figured I'd update it and put it out here where more people might see it, as the book in question is still quite popular.

Nassim Taleb is a former trader who wrote a textbook on options and market making, then became more philosophical in his best seller Fooled by Randomness, and now in The Black Swan. His big idea is that sometimes unexpected things happen: countries dissolve into anarchy, wars start, unknown authors become famous. His secondary ideas are variations on this theme: that people, especially experts, are generally biased, overconfident, and rationalize past events so they appear deterministic. Stated baldly, these assertions are hardly novel, but they are true enough, and one can argue about their relevance in various cases. As a highly popular presentation of ideas near to my interests and vocation, I think it is worth critically examining whether there is anything to his particular contribution to the literature on cognitive biases or social failures. My conclusion, in short, is no.

Taleb’s style is to severely criticize experts and authorities--lots of 'morons', 'idiots', and 'fools' out there--while implying that both he and his reader or listener are exempt from their many biases. Someone deflating puffed-up egos, criticizing the insular world of academics, and suggesting the experts have a huge blind spot on something important can be fun to read. But he has to make points that are true if new, or important if true, and here he fails to deliver.

For someone advocating doubt and criticizing the overconfidence and arrogance of experts and regular people alike, Taleb’s writings are filled with certainty, anger, and immodesty, with something of the Godelian impossibility of someone shouting 'I am the most humble!' Indeed, his current popularity rests on his supposed prescience in forecasting recent events, and his emphasis that this proves him correct is exactly the kind of naive confirmation from a small sample that he argues is sloppy thinking. Consistency is not a hobgoblin in Taleb's mind.

While people are generally overconfident about their driving ability or common sense, does that same overconfidence lead people to underestimate the probability of market crashes, and thus the price of insurance (eg, put options)? The data suggest the opposite is true, that is, that people overpay for such improbabilities based on hope. Survey data on beliefs are not necessarily economically important, because markets elicit results not from unmotivated and ignorant masses, but from a highly motivated and informed subset. People willing to offer ‘a side’ to such a bet tend not to be biased, and also pad their bets with a considerable safety margin so that their errors are not catastrophic, which in practice means you obtain much lower odds for improbable events than what simple surveys would imply.

A major theme of Taleb's is that models of uncertainty are too precise, and this thread has a long history. Taleb's sometime co-author Benoit Mandelbrot has been trying to sell the world on the big idea of fractals in finance for several decades. James Gleick’s Chaos outlined the essence of Mandelbrot’s fractals, which take a few simple lines of input to create graphics of insane complexity yet beautiful recursive symmetry, in many cases eerily similar to nature (eg, ferns, snowflakes). In dynamic systems you have chaotic processes that are purely deterministic yet sufficiently complex that they appear random. These systems have large jumps, or phase shifts, reminiscent of market crashes or sudden bankruptcies; they have butterfly effects where small changes produce big differences in outcomes. Mandelbrot and others have been trying to apply these ideas to financial markets for many decades now (since 1962!), and the effort has not gained traction, in spite of many papers applying the concept (search for skew or kurtosis in any finance journal and you will find plenty). Mandelbrot’s big idea in finance is that the field relies on a profoundly flawed assumption, namely that market price changes are normally distributed. He argues that price changes have much fatter-tailed distributions, such as the Cauchy, as evidenced by the high number of 5+ standard deviation moves in financial markets.
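
To see what the fat-tail complaint amounts to in practice, here is a small Python sketch (my illustration, not anything from Mandelbrot or Taleb): it draws simulated 'daily returns' from a Gaussian and from a fat-tailed Student-t and counts how many 5-standard-deviation moves each produces. The sample size and degrees of freedom are arbitrary choices for illustration.

```python
# A minimal sketch: count "5-sigma" days under a Gaussian versus a fat-tailed
# Student-t with 3 degrees of freedom. The Gaussian predicts essentially zero
# such days; the fat-tailed draw produces plenty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 250 * 40                      # roughly 40 years of daily returns

gauss = rng.standard_normal(n)
fat = rng.standard_t(df=3, size=n)

for name, x in [("Gaussian", gauss), ("Student-t(3)", fat)]:
    z = (x - x.mean()) / x.std()          # standardize by the sample std dev
    n_extreme = int(np.sum(np.abs(z) > 5))
    print(f"{name:13s}: {n_extreme} moves beyond 5 sample std devs")

# Under a true Gaussian, P(|z| > 5) is about 5.7e-7, i.e. ~0.006 such moves
# expected in 10,000 days, yet real equity indices have seen many.
print("Gaussian tail prob beyond 5 sigma:", 2 * stats.norm.sf(5))
```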

The result of these mistaken assumptions, according to Mandelbrot, is to understate risk, and so overprice stocks and underprice options, and also understate the capital cushion financial institutions need to withstand market risk. Mandelbrot’s alternative approach is based on new parameters that would replace the mean and standard deviation. His first parameter, Alpha, derived from Pareto's Law, is an exponent that measures how wildly prices vary; it defines how fat the tails of the price-change distribution are. The second, the H coefficient, is an exponent that measures the dependence of price changes on past changes. Unfortunately, Mandelbrot himself acknowledges in The Misbehavior of Markets that no two individuals calculate the same Alpha and H coefficient from the exact same historical data: there is no unique way to calculate these two parameters. Thus, using one method you could derive an Alpha and H coefficient that suggest a stock is risky; using another method you would reach the opposite conclusion. This flaw probably has some bearing on the approach's lack of popularity among practitioners.
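
To illustrate the estimation problem, here is a rough sketch using the Hill estimator, one standard way of estimating a tail exponent (not necessarily Mandelbrot's own procedure): the answer moves around substantially depending on how many tail observations you choose to include, which is exactly the kind of arbitrariness at issue.

```python
# A sketch of why the tail exponent is hard to pin down: the Hill estimator of
# alpha depends heavily on an arbitrary choice of how many extremes to use.
import numpy as np

rng = np.random.default_rng(1)
x = np.abs(rng.standard_t(df=3, size=10_000))   # fat-tailed sample, true tail alpha near 3

def hill_alpha(sample, k):
    """Hill estimator: uses the k largest observations above the (k+1)-th largest."""
    s = np.sort(sample)
    threshold = s[-(k + 1)]
    return 1.0 / np.mean(np.log(s[-k:] / threshold))

for k in (50, 200, 1000, 3000):
    print(f"k = {k:5d} largest obs -> alpha estimate {hill_alpha(x, k):.2f}")
# Different, equally defensible cutoffs give materially different alphas, and
# hence very different conclusions about how risky the asset "really" is.
```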

Frank Knight, meanwhile, in his classic Risk, Uncertainty, and Profit (1921), outlined the basic idea that it is uncertainty, in the form of non-quantifiable dispersion, that is at the root of profits. The basic idea is that risk, once quantified, is diversifiable, and thus becomes nearly risk-free. If you know that each of your champagne bottles could burst while fermenting with probability p, that number becomes very manageable the larger your operation, via the law of large numbers. Economists have been intrigued by this notion ever since, but by definition such uncertainty is unquantifiable, so as soon as you write down a random process it is no longer Knightian, which makes the concept rather elusive. Moreover, when we come up with proxies for uncertainty, such as volatility (surely more volatile assets are generally more uncertain), leptokurtosis (fat tails), or analyst or investor disagreement, the results do not have any obvious empirical implication beyond the fact that they exist. That is, assets with fatter downside tails, or more analyst disagreement, conditional on Gaussian volatility measures, are not predictive of future returns.
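
A quick simulation makes the champagne-bottle point concrete (the burst probability and operation sizes are made-up numbers): once the per-bottle probability is known, the realized loss rate converges to it as the operation scales, so the quantified risk becomes a predictable cost rather than genuine uncertainty.

```python
# A small sketch of Knight's point about quantified risk: if each bottle bursts
# independently with known probability p, the realized loss rate concentrates
# around p as the operation grows.
import numpy as np

rng = np.random.default_rng(2)
p = 0.02                                   # assumed burst probability per bottle

for n_bottles in (100, 10_000, 1_000_000):
    bursts = rng.binomial(n_bottles, p, size=2_000)      # 2,000 simulated seasons
    loss_rate = bursts / n_bottles
    print(f"{n_bottles:>9,} bottles: loss rate {loss_rate.mean():.4f} "
          f"+/- {loss_rate.std():.4f}")
# The dispersion of the loss rate shrinks like 1/sqrt(n): quantified risk
# diversifies away; Knightian uncertainty, by definition, does not.
```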

Taleb’s career as a talking head started in 1996 when, as the author of a niche derivatives text, his claim in Derivatives Strategy magazine that the new Value-at-Risk phenomenon was worse than useless made for great debate in risk management circles. I was leading a Value-at-Risk project at the time, so of course I found his criticisms of interest. JPMorgan had just introduced this method of aggregating risks in a highly popular practitioner brief. Their approach, RiskMetrics, outlined in detail how to estimate the volatility of a portfolio of currencies, bonds, equities, and even options. Previously, trading books containing bonds, currencies, equities, etc., each had their own little silos of risk reports, but this showed how they could be combined, basically by putting everything into a factor approach, in which every asset has a sensitivity to a factor, and every factor has a volatility and a correlation with the other factors. This was not new—factor analysis had been around for a while—but its clear application to a tangible problem was insightful, and created a lot of buzz.
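
For readers who have never seen the mechanics, here is a toy version of that factor approach in Python. The factor volatilities, correlations, and dollar exposures are invented for illustration; real implementations such as RiskMetrics use far more factors and face serious data issues, but the arithmetic is the same.

```python
# A stripped-down sketch of a factor-based parametric VaR: map the book to
# factor exposures, combine with a factor covariance matrix, read off VaR.
import numpy as np
from scipy import stats

# Hypothetical factors: an equity index, a 10y rate, and an FX rate.
vols = np.array([0.010, 0.004, 0.006])          # daily factor volatilities
corr = np.array([[ 1.0, -0.2,  0.3],
                 [-0.2,  1.0, -0.1],
                 [ 0.3, -0.1,  1.0]])
cov = np.outer(vols, vols) * corr               # daily factor covariance matrix

# Dollar sensitivities of the whole book to each factor (again hypothetical).
exposures = np.array([5_000_000, -2_000_000, 1_000_000])

port_sigma = np.sqrt(exposures @ cov @ exposures)     # daily P&L std dev
z99 = stats.norm.ppf(0.99)
var_1d = z99 * port_sigma
var_10d = var_1d * np.sqrt(10)                  # square-root-of-time scaling

print(f"1-day 99% VaR : ${var_1d:,.0f}")
print(f"10-day 99% VaR: ${var_10d:,.0f}")
```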

Value-at-Risk was not a panacea, but it was an improvement (the only people who use the word 'panacea' are critics). Taleb’s criticisms of VaR then are similar to his criticisms now: that the metric is not flawless, and that those who believe in parametric applications of VaR are fools. In a trivial sense he is right, but in the case of VaR, or specific parametric statistics, or expectations in general, there are many users who understand that tools need to be supplemented by judgment, adjusted for the parochial realities of various asset classes and their various deviations from pure 'normality'. It is a cliché on the risk management lecture circuit that you need not just technical knowledge but judgment, mainly from senior executives who don’t have any technical knowledge. Even in these stressed times, Taleb was dead wrong on VaR, in that in spite of his criticisms it is ubiquitous as a method for amalgamating short-term risks from different instruments into a single metric.

VaR is not useful for allocating capital or estimating the cost of equity, but it is useful in keeping your traders honest. It allows one to measure risk given various assumptions, and like any model it is garbage in, garbage out. The recent financial crisis has often been blamed on VaR. To the extent certain banks applied VaR to mortgages using, say, a 10-day horizon calibrated to data from the benign 1990s, that was an error. Yet the ubiquity of this error suggests it was not a mathematical mistake (math errors are random and go in both directions), but rather a flawed assumption that made the VaR exercise look benign: the assumption that housing prices do not decline. One can say with hindsight that this was incredibly stupid, yet prior to 2007 no one was arguing that financial institutions must be robust to this scenario, and government regulators were actively encouraging no-downpayment, no-documentation loans as part of a multipronged effort to increase home ownership. That is, the mistaken assumption was part of a broader mistake, comprehensible to all, not some technical error by risk managers, because that assumption was not theirs to make; it was part of a zeitgeist people seem to have forgotten, like all those who forgot they had voted for Nixon after he resigned. Most importantly, VaR is not perfect, nor a panacea, but the onus is on critics to describe a better alternative. Using 'judgment' or 'all one's information' seems better with hindsight, yet as foresight this is so undefinable it would be a significant step backward.
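
The calibration-window problem mentioned above is easy to demonstrate. The sketch below uses made-up volatilities, not actual mortgage data: it fits a 99% VaR on a calm sample and then counts breaches once volatility jumps.

```python
# A sketch of the estimation-window problem: a VaR fitted to a benign,
# low-volatility sample badly understates losses once the regime shifts.
import numpy as np

rng = np.random.default_rng(3)
benign = rng.normal(0, 0.005, 2500)        # ~10 years of calm daily returns
stressed = rng.normal(0, 0.025, 250)       # one year at five times the volatility

var99_benign = -np.quantile(benign, 0.01)  # 99% one-day VaR fit on the calm window
breaches = np.mean(stressed < -var99_benign)

print(f"99% VaR from benign window  : {var99_benign:.3%} of portfolio value")
print(f"Breach rate in stressed year: {breaches:.1%} (should be ~1% if the model held)")
```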

If you were to list all the financial company bankruptcies, the one common thread would be that they blindsided investors with their exposures. Who knew Orange County had such a position against interest rates ex ante in 1994? Who knew Barings had such an exposure to a trader in Singapore in 1995? These were not properly calculated risks that went awry, nor were they outright fraud where an unauthorized intraday position blew up. They were the result of investors or management not fully understanding the risks being taken, which a correctly calculated VaR number, correctly communicated, would often have easily shown. The errors were problems in getting an accurate VaR, which above all requires accurate data on positions.

If operating risk is the primary reason why trading operations fail, an emphasis on refining VaR seemingly misses the point. Operating risk is neglected for good reason, however, in that it is extremely difficult to quantify existing operating risks, which in turn makes it nearly impossible to evaluate methods of monitoring and reducing them. Just as Eisenhower said it is essential to plan prior to battle even though once a battle has commenced the plan is useless, VaR is essential in planning the allocation of capital, yet in crisis situations becomes useless. This is not a paradox, but merely the fact that when we train for competition, we practice tactics and strategies. Inevitably competition, especially competitions we ‘lose’, will bring forth situations we have not prepared for, but the best preparation for such an occurrence is not nihilism, but more practice. Indeed, many new situations are avoided by practice, which is why we learn math by solving old problems: we think these tools will be useful on unknown new problems. The alternative, focusing instead on operational risk, is such an ill-defined objective that it is much less salutary.

Taleb argues that the unpredictability of important events implies we should basically forget about all that is predictable, because that’s not where the real money or importance is. So from a risk management perspective, we should ignore Value at Risk, which measures anticipated fluctuations. Further, we should ‘go long’ on these unanticipated events by engaging in quirky activities on the off-chance that we randomly find something, or someone, really valuable.

Success in markets, like life, is a combination of ability, effort, and chance. Much of intelligent thought is distinguishing between what is predictable versus what is unpredictable; it is to any organism's advantage to find out what we can figure out and change, and what is forever mysterious and unalterable (eg, the Serenity Prayer). The brain is constantly predicting, trying to figure out cause and effect so it can better understand the world. Most of what humans process is predictable, but because we take predictable things for granted, they are uninteresting. We can't predict some things, but instead of resorting to nihilism, we merely buy insurance or manage
our portfolios--in the broad sense of the term--to have an appropriate robustness. Discovering that certain things are basically unpredictable does not diminish our constant focus on trying to predict more and more things. People will disagree on which risks at the margin are predictable, but that's to be expected, and we all hope to be making the right choices that optimize our serenity at the margin of our predictive prowess.

From Taleb's Wikipedia entry circa July 2006, we see where Black Swan thinking goes when applied to an investment strategy:

When he was primarily a trader, he developed an investment method which sought to profit from unusual and unpredictable random events, which he called "black swans." His reasoning was that traders lose much more money from a market crash than they gain from even years of steady gains, and so he did not worry if his portfolio lost money steadily, as long as that portfolio positioned him to profit greatly from an extremely large deviation (either a crash or an unexpected jump upwards).

In fact, Mandelbrot also argues for this strategy. Taleb co-authored a paper arguing that most people systematically underestimate volatility. Furthermore, he argues there exists not only a lack of appreciation of fat tails, but a preference for positive skew, in that people prefer assets that jump up, not down, which would imply the superiority of buying out-of-the-money puts as opposed to calls, because the negative tails that increase the price of puts are unappreciated. He is affiliated with a fund that tends to be long tail risk, presumably by being long deep out-of-the-money options while selling at-the-money options, a locally delta- and vega-neutral strategy.
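
For the mechanics, here is a rough Black-Scholes sketch of that kind of position. I am guessing at the strikes, maturities, and implied volatilities, and the fund's actual book is of course not public; the point is only to show what "locally delta- and vega-neutral but long the tail" looks like.

```python
# A sketch of a long-tail-risk position: long deep out-of-the-money puts, short
# at-the-money puts, sized so the net vega is roughly zero, with residual delta
# hedged in the underlying. Black-Scholes greeks, illustrative parameters.
import numpy as np
from scipy import stats

def bs_greeks(S, K, T, r, sigma, kind="put"):
    """Black-Scholes delta and vega for a European option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    delta = stats.norm.cdf(d1) if kind == "call" else stats.norm.cdf(d1) - 1.0
    vega = S * stats.norm.pdf(d1) * np.sqrt(T)     # per 1.00 change in sigma
    return delta, vega

S, T, r = 100.0, 0.25, 0.03
d_atm, v_atm = bs_greeks(S, K=100, T=T, r=r, sigma=0.20)   # short this one
d_otm, v_otm = bs_greeks(S, K=70,  T=T, r=r, sigma=0.35)   # long these (smile vol)

n_otm = v_atm / v_otm              # OTM puts needed per short ATM put for vega ~ 0
net_delta = n_otm * d_otm - d_atm
net_vega = n_otm * v_otm - v_atm

print(f"Long {n_otm:.1f} OTM puts per short ATM put")
print(f"Net vega : {net_vega:+.4f} (locally flat)")
print(f"Net delta: {net_delta:+.3f} (hedged with the underlying)")
# Little local exposure, but large positive convexity far from the money in a
# crash -- the 'long tail risk' profile in question.
```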

These assertions present some straightforward tests, which a Popperian like Taleb should embrace. Specifically, buying out-of-the-money options, especially puts (because of negative skew), should on average make money. But insurance companies, which are basically selling out-of-the-money options, tend to do as well as any industry (Warren Buffett has always favored insurance companies, especially reinsurers, as equity investments). Studies by Coval and Shumway (2001) and Bondarenko (2003) have documented that selling puts is where all the extranormal profit seems to be. Of all the option strategies, selling, not buying, out-of-the-money puts has been the best performer historically. Further, Sophie Ni finds that out-of-the-money options are more overpriced the further they are out of the money.

Malcolm Gladwell wrote a 2002 New Yorker article contrasting the thoughtful, pensive Taleb with the brash cowboy Victor Niederhoffer: Taleb buys out-of-the-money puts, Niederhoffer sells them. Taleb is betting on the big blow-up, Niederhoffer on the idea that people overpay for insurance. Who was right? Well, Niederhoffer ran his flagship fund until September 2007 from a chalet-style mansion in Weston, Connecticut. Taleb shut down his Empirica Kurtosis fund at the end of 2004, and the only public data on it suggest a rather anemic Sharpe ratio (up 60% in 2000, but then it fluttered). Later Taleb described the fund as a hedge or a laboratory. While neither strategy was great, and returns are proprietary, I venture that Niederhoffer's was better if you just look at their lifetime Sharpe ratios. Taleb's latest funds, which he is less involved with day-to-day but which implement his basic beliefs about extremes, were up significantly in 2008, though this is to be expected given the extreme decline, and is similar to how Empirica started.

Taleb's big problem is that he misinterprets the mode-mean trade. A mode-mean trade is one where a trader finds a strategy with a positive mode but a zero or negative mean. He then uses someone else’s capital to collect several years of good returns, making good money for creating or managing the strategy; then, when the strategy gives it all back, the investor bears all the loss. The zero mean means that all the modal returns come crashing down in a short time, generating large losses, and the manager walks away with his ill-gotten bonuses from the benign modal periods. That’s a bad strategy for the investor, and the trader who manages it is either naive or duplicitous. High-yield debt is a good example: the stated yields are quite high, but the total returns to B-rated bonds are the same as for BBB-rated bonds, at several times the risk (and very concentrated in recessions). However, just because selling puts is a bad strategy, it doesn't mean buying puts is a good strategy. A Sharpe of 0.2 is a bad long position, but a worse short.
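
To see what a mode-mean payoff looks like in numbers, here is a small simulation; the premium, crash probability, and crash size are invented for illustration.

```python
# A sketch of the 'mode-mean' payoff: a strategy that collects a small premium
# almost every period but occasionally gives it all back. The typical (modal)
# year looks great; the long-run mean is roughly zero or negative.
import numpy as np

rng = np.random.default_rng(4)
n_years, premium, crash_prob, crash_loss = 100_000, 0.05, 0.10, 0.50

crashes = rng.random(n_years) < crash_prob
returns = np.where(crashes, -crash_loss, premium)

print(f"Share of 'good' years : {1 - crashes.mean():.1%}")
print(f"Median (modal) return : {np.median(returns):+.1%}")
print(f"Mean return           : {returns.mean():+.2%}")
print(f"Std dev of return     : {returns.std():.1%}")
# The manager paid on the modal years does fine; the investor who owns the mean
# does not. Expected value here: 0.9*5% - 0.1*50% = -0.5% per year.
```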

A ‘Black Swan’ is something that is totally unexpected and important. [European] people assumed all swans were white, but then they saw a black swan, and everyone certain all swans were white was wrong! That’s a 'gotcha' game for people who really take seriously someone’s assertions about the color of birds. But when there’s a price involved, the payoff to such an insight is not obvious, if not totally absent. For example, London bookmakers offer ‘only’ 250-1 odds that a perpetual motion machine will be discovered, and 100-1 odds that aliens will be contacted: longshots ignored in a casual context are usually overpriced in actual markets.

In option markets there is a volatility smile, whereby out-of-the-money options have higher implied volatilities, especially on the downside. For example, in May 2006, when rumors of GM's woes were rampant and its stock price was around 32, GM options had a one-year at-the-money implied volatility of 60, but down at a strike price of 15 the implied volatility was a much higher 140. The fact that Black-Scholes assumes lognormal returns does not imply market participants think likewise, so it is simply incorrect to assert that a market collapse of 23 standard deviations has an infinitesimal probability based on the normal distribution, because real markets are aware of fat tails. Perhaps options were priced that way in the 1980s, but since then there has been a volatility smile that directly captures non-normality. You can't profit from the idea that market returns have fat tails because that idea is priced into the market via the volatility smile, and this volatility smile shows up in 'disaster' insurance of all types: people pay a lot to sleep easy. Many people have looked at option prices, and they all find that out-of-the-money puts are the most overpriced of all options—people are expecting ‘Black Swans’ too much on average.
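
To put a number on how much the smile matters, here is a quick Black-Scholes sketch using the GM figures above; the interest rate and the lognormal setup are my assumptions for illustration.

```python
# Pricing the one-year 15-strike put at the observed 140 smile vol versus the
# flat 60 at-the-money vol: the market already charges heavily for fat tails.
import numpy as np
from scipy import stats

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * stats.norm.cdf(-d2) - S * stats.norm.cdf(-d1)

S, K, T, r = 32.0, 15.0, 1.0, 0.05
p_flat  = bs_put(S, K, T, r, sigma=0.60)    # flat ATM vol applied to the wing
p_smile = bs_put(S, K, T, r, sigma=1.40)    # the observed smile vol at K=15

print(f"15-strike put at 60 vol : ${p_flat:.2f}")
print(f"15-strike put at 140 vol: ${p_smile:.2f}  ({p_smile / p_flat:.0f}x)")
```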

Taleb responds by noting that the 1987 stock market crash changes everything, because if you had bought puts then, you would have made enough money to make up for decades of otherwise weak performance. While Coval and Shumway do not include 1987 in their academic study, Bondarenko does, and gets the same results. Taleb points out several anecdotes of financial market crises as further evidence of the importance of financial debacles, such as the 1998 crisis related to Long Term Capital Management (implied vols and Libor spreads skyrocketed), or the emerging markets blow-up of 1997. These events made money for people who were long volatility, or specifically long puts, but the plural of anecdote is not data, so he should have cited an empirical paper showing positive abnormal returns to taking on fat-tailed or asymmetric risk. Though it is easy to recall extreme events that would generate large fortunes for those on the correct side, one has to put them in full context, against the cost of insuring against these events over long periods of time. What is the sample space of all the things one is insuring against? Stasis is data, as Stephen Jay Gould used to say. The volatility smile, and the large bid-ask spreads (relative to price) on extreme strikes, imply you can’t make extranormal profits over the long run by going long ‘Black Swans’--at least in the markets where Taleb has the most experience (though not, according to him, expertise, which is more philosophically oriented).

The bottom line is that people tend to underappreciate low probability events when they are immaterial--because they are immaterial! So they underestimate the prevalence of Black Swans because if you find one, who cares? But hurricane insurance, a 3-delta put option on GM? You will pay up for that.

In the end, he promises to teach us how to take advantage of these Black Swans. His strategy is pretty simple. He argues for a barbell strategy of mostly safety plus a dollop of wild risk, which is, basically, exposure to something totally unquantifiable, like llama farms, or any of the myriad opportunities that neighbors, spammers, and late-night paid TV tout. In the context of Tobin’s two-fund separation theorem, this means the ‘efficient’ risky portfolio is the most insanely unquantifiable and risky portfolio you can imagine, tempered by its modest allocation. Yet this implies the unquantifiable and risky portfolio has very good returns, which, by definition (unquantifiable), is merely an assertion. As for the super-safe assets, the only consistent risk premiums come from extending from overnight to a couple of years in bond maturity, and from going from AAA to BBB credit risk. Super safe is generally 'too safe', in that economists find this risk premium outsized relative to the difference in volatility or covariance.
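
To see why the barbell needs that assertion about the wild sleeve's returns, consider the following back-of-the-envelope sketch; every number in it is an illustrative assumption, not an estimate of any actual asset.

```python
# Barbell arithmetic: 90% T-bills plus 10% in a 'wild' sleeve only beats a
# conventional stock/T-bill mix if you grant the wild sleeve a very high
# expected return -- which is exactly the unquantifiable part of the claim.
import numpy as np

rf = 0.03                               # T-bill return
mu_stock, sig_stock = 0.08, 0.16        # conventional equity assumptions
sig_wild = 0.60                         # wild sleeve: a volatility we can guess at

def barbell(mu_wild, w_wild=0.10):
    mu = (1 - w_wild) * rf + w_wild * mu_wild
    sig = w_wild * sig_wild             # T-bills treated as riskless
    return mu, sig

mix_mu = 0.6 * mu_stock + 0.4 * rf
mix_sig = 0.6 * sig_stock               # the 40% in T-bills treated as riskless
print(f"60/40 stock/T-bill mix: mu {mix_mu:.1%}, sigma {mix_sig:.1%}, "
      f"Sharpe {(mix_mu - rf) / mix_sig:.2f}")

for mu_wild in (0.10, 0.30, 0.60):
    mu, sig = barbell(mu_wild)
    print(f"barbell, wild sleeve mu={mu_wild:.0%}: mu {mu:.1%}, sigma {sig:.1%}, "
          f"Sharpe {(mu - rf) / sig:.2f}")
```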

There is good reason to suspect one loses money, on average, on the wildest risks. Consider longshot odds at the racetrack and the highest payout (and thus riskiest) lottery tickets. Researchers have found a negative return premium for highly volatile stocks. Applied to ‘uncertainty’, this same pattern holds, as stocks with the most earnings forecast estimation error also have the most volatility, so it is no surprise they too have a negative premium. Truly improbable scenarios generally involve more hope than rational investment, as people will pay you to help them dream of the chance to become incredibly rich in the same way that the biggest lotteries, with 100 million to 1 odds, have the highest jackpots and the lowest mean returns. There are an infinite number of companies that directly target people wishing to make an end run around the rat race, and most of these companies are engaged in selling nothing more than hope (estimates are that only 2% of proposed home-based businesses touted on the internet are legitimate business opportunities).

The Black Swan argues that standard statistics is flawed because it is backward looking—it uses ‘historical’ data—and argues that standard measures of risk like the normal distribution are ‘frauds’. I too prefer future data, but it is hardly a practical alternative. The Gaussian distribution is common in theory because it is so analytically tractable; it often yields closed-form solutions that let one see how one variable affects another, and it has nice properties, such as the fact that the sum of two Gaussian random variables is also Gaussian. In practice, no one actually takes the assumption literally; practitioners make ad hoc adjustments, such as the volatility smile for option prices. The key is that, from an expositional point of view, the Gaussian distribution usually gives one the gist of the true ‘fatter-tailed’ distribution, and allows easy exposition. Non-economists often giggle at terms like ‘fat tails’ or homoskedasticity, but indeed most real-world distributions are not ‘Normal’ or Gaussian; they simply have fatter tails than the Gaussian. Does this imply statistics is a fraud? Well, if you mistake the map for the territory, indeed, this is news. For everyone with some common sense it’s an approximation or expositional device.
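
A quick comparison shows what 'gives one the gist' means here: a Student-t scaled to the same variance as the Gaussian agrees closely for ordinary-sized moves and diverges only far out in the tails. The choice of four degrees of freedom below is arbitrary.

```python
# Gaussian versus a fat-tailed Student-t(4) rescaled to unit variance: the two
# give similar probabilities for everyday moves, and differ by orders of
# magnitude only for extreme ones.
import numpy as np
from scipy import stats

df = 4
t_scale = 1.0 / np.sqrt(df / (df - 2))       # rescale so the t has unit variance

for k in (1, 2, 3, 5, 8):
    p_norm = 2 * stats.norm.sf(k)
    p_t = 2 * stats.t.sf(k / t_scale, df)
    print(f"P(|x| > {k} sigma): normal {p_norm:.2e}   t(4) {p_t:.2e}   "
          f"ratio {p_t / p_norm:,.0f}x")
# For 1-2 sigma moves the two are close; at 5-8 sigma the Gaussian is off by
# orders of magnitude -- a map-versus-territory issue, not a 'fraud'.
```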

Taleb belittles predictions that have large or unmentioned error rates, yet any specific error metric (standard deviation, value-at-risk, correlation, R2, etc) is, in his mind, a fraud and useless because it relies on an assumption, one that is 'wrong'. He argues we should reward those who imagine the impossible, but what does that mean in practice? That we encourage people to enumerate everything possible, no matter how improbable? In finance, such risk reports are all too common because they look like a lot of work, yet generating an unprioritized list of things that could happen is easy and practically useless: you simply can’t address all the points, so they sit there as mere ‘I told you so’ observations. One can remember Richard Clarke’s vague warning about Al Qaeda prior to 9/11, which in no way suggested that changes to hijacking protocols or airline boarding should be made, but merely that something could happen--true but unhelpful.

Getting people to highlight wild risks comes easy, which I think is a big part of the book’s appeal. Legislators and personal-injury lawyers eagerly hype risks with negligible real impact, like secondhand smoke, or getting cancer from trace amounts of chemicals. Sometimes they create considerable public concern about risks that don't exist, like contracting an autoimmune disease from breast implants, or cell phones causing cancer. Newsrooms are full of English majors who make confident pronouncements about global warming or some other complicated process, all in hopes of getting viewers or readers activated.

I could imagine Taleb teaching a statistics class to freshmen and, instead of starting with the arithmetic mean and standard deviation, asking 'what was the probability of an airplane taking down the World Trade Center on September 10, 2001?' and waxing poetic about how 'we just don’t know!' Students might think such talk is much cooler than boring formulas, but such confused thinking leads nowhere in particular and can be indulged indefinitely without producing anything useful, as Taleb demonstrates. Of course one needs technical knowledge and common sense in anything, but while you can teach one, you can't teach the other. We teach statistics, calculus, etc., not because they solve every problem, but because they can help, in many problems, to delineate and potentially manage that which we can change from that which we can’t. Surely a college department of 'wisdom' or 'good judgment' would be a valuable thing; unfortunately no one can agree on the curriculum.

Martin Gardner wrote a popular column for Scientific American, and in the process received a lot of mail from ‘cranks’ telling him about perpetual motion machines and the like. So he wrote a book called Fads and Fallacies. In the book he describes "cranks" as having five invariable characteristics:

  1. They have a profound intellectual superiority complex.

  2. They regard other researchers as idiotic, and always operate outside the peer-review process.

  3. They believe there is a campaign against their ideas, a campaign compared with the persecution of Galileo or Pasteur.

  4. They attack only the biggest theories and scientific figures.

  5. They coin neologisms.

On his personal website, Taleb once described himself as "an essayist, belletrist, literary-philosophical-mathematical flâneur," a conception some people find endearing; me, not so much. Literary-philosophical-mathematical types--especially flâneurs--tend to be full of themselves, supporting Gardner’s characteristic #1. He prides himself on not submitting articles to refereed journals, considers most people who are indifferent to him fools, and disdains editors, even spellcheckers (#2). He proudly notes that someone told him “in another time he would have been hanged [for what, inanity?].” Wilmott Magazine, a quant publication put out by his colleague Paul Wilmott, wrote a fawning article about him in which it noted that he is “Wall Street’s principal dissident. Heretic! Calvin to finance’s Catholic Church” (#3). His website states his modest desire to understand chance from the viewpoint of “philosophy/epistemology, philosophy/ethics, mathematics, social science/finance, and cognitive science”, supporting #4. Lastly, for #5, he has gone so far as to print a glossary of his neologisms (eg, “epistemic arrogance” for “overconfidence”). In Martin Gardner’s taxonomy, Taleb is a classic crank.

Clearly his experience as an options trader gives him credibility, but I think this is a big issue: that of successful brokers thinking they made their money off investing insights. Before he became a regular on the talking-head circuit and an expert on Judeo-Arabic philosophy, he was primarily a trader for large market makers. These are not speculators investing their wealth based on insights, but more like brokers, making money off customer flow, only their buyers and sellers are the guys on the phones talking to retail clients. Such traders spend most of their time looking at a model such as Black-Scholes that tells them what price to buy and sell at based on some underlying parameters. These models are more or less standard, so the main things the market maker has to do are keep his model inputs fresh, post prices to potential buyers and sellers, fill market orders, and pick off stale limit orders. Customers generally have access to older prices, and in a situation where the current price moves every second, this clearly puts the trader at an informational advantage, which is why it was such a lucrative field, especially in the days before the internet became big (ie, Taleb's time). The trader makes money irrespective of movements in the underlying model price, as in general he keeps his exposure to first-order (eg, delta) and second-order (eg, vega) risks as close to zero as possible.

But such trading skill is quite distinct from what a speculator or investor does, which involves a directional bet on the very first- or second-order risks that traders normally try to erase. Traders know as much about what makes prices move as plankton knows about what makes the tides move. Much of being a trader is encouraging trading activity from hesitant brokers, and so many traders are quite adept at presenting themselves as more than middlemen--as men with an angle or a story. A good trader is probably truly delusional about his prognostic abilities, because this allows him to appear sincere in his sales pitch for the latest trade idea; those who don't believe their own stories make weak sales pitches (see Robert Trivers, whom Taleb mentions favorably in The Black Swan, and note the irony endemic in his writings). Most of these traders are certain they could make money without their customer flow, because the same self-deception that serves them well chatting up brokers or impressing their boss generates delusions of strategic grandeur. Supreme self-assurance, even if undeserved, makes for a good trader just as much as knowing your Greeks. Thus Taleb’s ‘narrative fallacy’ argument plays right into his own biases: he has fooled himself into thinking he knows 'the big picture' because that delusion was helpful in his own career.

Rich investor or rich broker: who is more easily fooled about his alpha? Notice the relation to the theme of Fooled by Randomness? Taleb is consistently amusing because his criticisms of others apply so neatly to himself: he claims he is an empiricist yet supports his points with anecdotes. The Black Swan makes fun of ‘experts’ with credentials, but he states he does not deign to engage with anyone not sufficiently expert; he states he is not interested in being a speaker-bureau commodity, yet routinely travels the rubber-chicken circuit; he derides forecasters who don't give a full accounting of their prior forecasting history, yet delinks old remarks about Value-at-Risk and recategorizes his extinct hedge fund as a hedge, not a fund; he claims to prize humility, yet is most immodest; he argues against applying the law of large numbers, and also against inferring too much from small samples; people apply models to reality in a biased manner, yet people naively extrapolate data without the appropriate theory; forward thinking is adaptive, yet forward thinking is error-laden. Some people think inconsistency is a sign of genius; I think it just reflects confused thinking.

Inconsistency is the major problem with Taleb's oeuvre. For example, he often praises the work of Danny Kahneman as one of the uniquely prophetic economists, famous for his prospect theory, which holds, among other things, that people overweight small probabilities: they pay small amounts to gamble on longshots and to insure against rare losses, while being risk averse over gains. Yet Taleb argues that a predominant financial vice is the mode-mean trade, where people desire to make a little bit every day, often at the expense of blowing up on 'fat tail' events--that is, people underweight the improbable. These are opposite theories, which would not be a big deal if they were minor assertions from these men, but in fact they are the signature financial hypotheses of each. Taleb may appreciate Kahneman's diverse work, but one would expect him to be a harsh critic of this seminal idea, not the huge unqualified fan he is.

To be popular it is helpful to make people think they are learning something new about something novel and important. Yet the masses do not really like novelty; they like affirmation of their inchoate prejudices. Thus, a reader can leave The Black Swan thinking that any expert is either a charlatan or a fool, except Taleb and those smart enough to appreciate him, a group that prides itself on knowing what it doesn't know, and on believing that any specific model is imperfect and therefore evidence of naive Platonism. The current financial crisis may make radical theories that suggest junking existing theory more attractive, but remember that the Great Depression was a Black Swan, and it did not help macroeconomic theory so much as lead it into the desert for 40 years, giving many a wasted life championing not merely a welfare state but socialism and all its unintended horrors. If something really unpredictable happens, the large number of perennial, disparate forecasters of disaster, combined with Bayesian statistics, still implies that those calling for the end times are probably 'lucky fools', as Taleb would say. I do agree his warnings of an extreme event were spot-on over the past year, but this is no more impressive than Henry Blodget's Amazon call in 1998 or Elaine Garzarelli's call of the 1987 stock market crash, any more than Angelo Mozilo's subprime success up to 2006 made him a business visionary. I look to broader historical data and see that buying out-of-the-money options is a poor investment strategy, so I don't consider recent events proof of some really useful truth.

To the degree The Black Swan has arguments about the essence of risk, they are at least a generation old, even if many readers are pleasantly introduced to them for the first time (for fat tails see Mandelbrot (1962), for nonquantifiable risk see Knight (1921), for various cognitive biases see Kahneman, Slovic, and Tversky (1982), a compilation of papers mainly from the 1970s), and these works have spawned, or are clearly referenced by, literally thousands of books. The Black Swan may popularize the concept of low-probability events, what were called 'peso problems' (see Rietz 1988), and that would be a good thing. But ultimately, the bumper sticker "shit happens" is kind of funny, kind of true, but hardly profound.
