A AAA rating means about a 1 in 10,000 chance (0.01%) of defaulting annually. This is very difficult to validate from experience alone, but it was plausible because the default rate for regular AAA corporate debt at Moody's from 1920 through 2007 was zero. Thus they figured it was less than the AA rate, which actually had some defaults, around 0.06%, and so it seemed a reasonable extrapolation. But a slew of AAA rated paper trading at 50 cents on the dollar basically means the AAA rating was wrong, rejected, statistically, at the 0.001% level. So if AAA ratings are wrong, how does one adjust them? Is it now 1 in 100 (like a BB+ rating), as opposed to 1 in 10,000?
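To put rough numbers on that "rejected at the 0.001% level" claim, here is a minimal sketch of a one-sided binomial test; the issue and impairment counts are hypothetical, since the post does not give them, and prices of 50 cents on the dollar imply impairment rates at least this bad.

```python
from scipy.stats import binom

# Hypothetical illustration: track n AAA-rated issues for a year under the
# null hypothesis that each defaults/impairs with probability p0 = 0.01%,
# and suppose k of them actually become impaired.
p0 = 0.0001   # the 1-in-10,000 annual default probability implied by AAA
n = 1000      # hypothetical number of AAA issues observed
k = 5         # hypothetical number of impairments actually seen

# One-sided p-value: chance of seeing k or more impairments if p0 were true.
p_value = binom.sf(k - 1, n, p0)
print(f"P(X >= {k} | n={n}, p={p0}) = {p_value:.1e}")
# On the order of 1e-07: even a handful of impairments among a thousand issues
# is wildly improbable under the AAA assumption -- the sense in which the
# rating is 'rejected'.
```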
Such changes in probabilities are not peculiar to finance. Note that the Space Shuttle, prior to the Challenger disaster in 1986, had an internal probability of failure of 1 in 100,000; after the crash, this was raised to 1 in 50. If this is the magnitude of the change in default perceptions, it will take a long time to reverse, because via Bayesian inference the rating agencies will need years of cross-sectional observations to get these estimated probabilities back near their old levels. There is no way to speed this up; such is validation.
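To get a feel for how slow that re-validation is, here is a toy Beta-Binomial update; the prior parameters, cohort sizes, and "back near old levels" threshold below are all assumptions for illustration, not anything the agencies actually publish.

```python
# Toy Beta-Binomial sketch of how slowly a default-probability estimate
# recovers after one bad cohort. All counts are hypothetical.

a, b = 1.0, 9999.0                      # prior centered at 0.01% (1 in 10,000)
print(f"prior mean:       {a / (a + b):.3%}")

# A crisis cohort: say 100 impairments out of 10,000 rated issues.
a, b = a + 100, b + 9900
print(f"post-crisis mean: {a / (a + b):.3%}")   # roughly the BB+ neighborhood

# Add clean years (0 defaults out of 10,000 issues each) until the posterior
# mean falls back within 2x of the original 0.01%.
years = 0
while a / (a + b) > 0.0002:
    b += 10000
    years += 1
print(f"clean years needed to get back below 0.02%: {years}")
```

In this toy parameterization it takes decades of clean cross-sections, which is the sense in which there is no quick way to restore a 1-in-10,000 claim once it has been rejected.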
Rating agencies used to rubber-stamp some types of securities, deferring to the market and to path dependence, and now everyone is skeptical of all of their investment grade ratings. The result is that everyone assumes the worst, and only the Federal Government has a AAA rating, because it can always print money to pay its debts. Thus the market is now missing a credible stamp of investment grade (which usually implies you can take it at face value), and everything is trading like it's junk (i.e., below BBB-). Even worse, via the ordinal ranking this means that B and BB rated bonds trade like distressed securities. The market is constipated because the secondary market for debt has shut down, as previously useful ratings are now considered suspect, and our financial institutions do not have the wherewithal to warehouse all the debt in the economy. The New York Port Authority got no bids for its AA rated bonds, clearly showing no one believes they are anywhere near that quality.
But consider that this bad mortgage debt was not only rated AAA, it traded as if it were 'risk free' back in 2006. If debt is trading as if it were AAA, it would be very difficult to defend rating it as, say, BB. The market is so big that many people are independently corroborating your judgment. Surely many of them looked at these deals in detail; indeed, probably hundreds of thoughtful people did, including regulators at the Office of Federal Housing Enterprise Oversight (OFHEO) and the Fed. The rating agencies were analyzing mortgages the same way everyone else, with very different incentives, did. As the market weakened underwriting standards from the 1990s onward, it seemed to make no difference, and academics and regulators were saying these new mortgage innovations were not material. Legislators with a lot of regulatory power implied these were morally righteous changes.
If a rating agency had decided in 2006 that the cumulative effect of these changes increased the probability of default from 0.01% (AAA) to 0.50% (BB+), this not only would have been ridiculed; if a convincing case had been put forward, it would have shut down the housing bubble earlier. But then wouldn't the rating agency have been blamed for this entire mess? After all, the total amount of bad housing debt is a fraction of the total loss in wealth from this financial debacle, as some unknown accelerator mechanism has destroyed at least 10 times the value of the bad assets at its center. Estimates of housing wealth destruction were 'only' around $200B back in March 2008, after the mortgage market had collapsed; year to date, global stock markets have destroyed over $10 trillion. Now, if Moody's had had the prescience in 2006 to see, say, 50% of this coming, and had caused a $1 trillion mini-correction while avoiding the $10 trillion loss, would they have been congratulated? Remember, you don't get to view alternative time-paths in the multiverse to prove your actions mitigated a disaster; rather, you are left looking like you screwed everything up.
Further, that's all with hindsight. The graph above shows how Investment Grade default rates vary over time (mean about 0.15%). I wouldn't know how to characterize that distribution, with a mode at zero and some positive numbers that seem to come from a non-stationary distribution. Such distributions get funkier at lower default rates. Since low-probability defaults are clustered in time, you probably will not observe the event that proves you right within your working life. The net benefit of changing prospective default probabilities from 0.01% to 0.50% is thus probably not something you would expect to 'prove' in a working career at Moody's. Standing athwart history yelling 'Stop!' sounds neat, but in reality that is only realistic when you have no effect, because people generally do not get credit for preventing low-probability events from happening.
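A quick way to see the clustering problem is a toy regime-switching simulation; the crisis frequency and default rates below are assumptions chosen only so the long-run average comes out near 0.50%.

```python
import random

# Toy sketch: most years are quiet with essentially AAA-like default rates;
# occasionally a crisis year arrives and defaults cluster in it.
random.seed(0)

P_CRISIS = 0.05        # assume a crisis year roughly once every 20 years
QUIET_RATE = 0.0001    # default rate in quiet years (the 1-in-10,000 figure)
CRISIS_RATE = 0.10     # default rate in crisis years (losses bunch together)
CAREER_YEARS = 30
TRIALS = 100_000

no_crisis_careers = 0
for _ in range(TRIALS):
    if not any(random.random() < P_CRISIS for _ in range(CAREER_YEARS)):
        no_crisis_careers += 1

long_run = P_CRISIS * CRISIS_RATE + (1 - P_CRISIS) * QUIET_RATE
print(f"long-run average default rate: {long_run:.2%}")     # about 0.51%
print(f"30-year careers that never see a crisis: "
      f"{no_crisis_careers / TRIALS:.0%}")                   # about 21%
```

In this toy setup roughly a fifth of 30-year careers never see the crisis year at all, and the rest get nearly all of their evidence bunched into one or two years, which is why a 0.01% versus 0.50% call is so hard to 'prove' on the job.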
Given the cyclical nature of default rates, I just don't see how you can design a mechanism to reliably estimate a 1 in 10,000 annual default rate for the diverse set of securities, often novel securities, that the agencies are asked to evaluate. We just don't have enough data on comparable issues. Perhaps every new security needs a top rating of BB+ until it generates 50 years of data.
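As a rough check on that 50-year figure, the 'rule of three' gives an approximate 95% upper bound of 3/n on a rate when zero defaults are observed in n independent issue-years; the cohort size below is an assumption.

```python
# Rule of three: zero defaults in n independent issue-years gives an
# approximate 95% upper confidence bound of 3/n on the annual default rate.
target_rate = 0.0001                    # the 1-in-10,000 AAA-style claim
issue_years_needed = 3 / target_rate    # 30,000 issue-years
print(f"issue-years of zero defaults needed: {issue_years_needed:,.0f}")

cohort = 600                            # hypothetical comparable issues/year
print(f"years of clean history at {cohort} issues per year: "
      f"{issue_years_needed / cohort:.0f}")
# ~50 years -- and that assumes independence, which clustered defaults violate.
```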