Thursday, September 3, 2009

The Statistics of Prediction

Given the high level of ambiguity in the economy and markets at the moment (both gold and Treasuries rallying?), I thought it might be useful to revisit the concept of forecast error. Economic forecasters, even (perhaps especially?) the top, highest paid Wall Street celebrity economists, are egregiously poor predictors of stock market levels or direction over any meaningful time frame.

Robert Prechter does an excellent job of describing the logical fallacy underlying economists' forecasting reputations:

From Elliott Wave Theorist, May 2009

"Although it has suddenly become fashionable to bash economists, I would like to point out that economists are very valuable when they stick to economics. They can explain, for example, why an individual's pursuit of self interest is beneficial to others, why prices fall when technology improves, why competition breeds cooperation, why political action is harmful, and why fiat money is destructive. Such knowledge is crucial to the survival of economies. 
Economic theory pertains to economics, but not to finance and so-called macro-economics. Socionomic theory pertains to social mood and its consequences, which manifest in the fields of finance and macro-economics. 
If you want someone to explain why minimum wage laws hurt the poor, talk to an economist. But if you want someone to predict the path of the stock market, talk to a socionomist. 
The two fields are utterly different, yet economists don't know it. 
How Correct are Economists Who Forecast Macro-Economic Trends? 
The economy is usually in expansion mode. It contracts occasionally, sometimes mildly, sometimes severely. Economists generally stay bullish on the macro-economy. In most environments, this is an excellent career tactic. The economy expands most of the time, so economists can claim they are right, say, 80 percent of the time, while missing every turn toward recession and depression. 
Now, suppose a market analyst actually has some ability to warn of downturns. He detects signs of a downturn ten times, catching all four recessions that actually occur but issuing false warnings six times. He, a statistician might say, is right only about 40 percent of the time, just half as much as most economists; therefore the economist is more valuable. But these statistics are only as good as the premises behind them. 
Suppose you eat at an outdoor cafe daily, but it happens that on average once every 100 days a terrorist will drive by and shoot all the customers. The economist has no tools to predict these occurrences, so he simply 'stays bullish' and tells you to continue lunching there. He's right 99 percent of the time. He is wrong 1% of the time. In that one instance, you are dead. 
But the market analyst has some useful tools. He can predict probabilistically when the terrorist will attack, but his tools involve substantial error, to the point that he will have to choose on average 11 days out of 100 on which you must be absent from the cafe in order to avoid the day on which the attack will occur. This analyst is therefore wrong 10% of the time, which is ten times the error rate of the economist. But you don't die. 
How can the economist be mostly right yet worthless and the analyst be mostly wrong yet invaluable? The statistics are clear - aren't they? 
The true statistics, the ones that matter, are utterly different from those quoted above. When one defines the task as keeping the customer alive, the economist is 0% successful, and the analyst is 100% successful.  
When consequences really matter, differences in statistical inference can be a life and death issue. In the real world, business people need timely warnings, and realize that economists miss most downturns entirely. Would you rather suffer several false alarms, or would you rather get caught in expansion mode at the wrong time and go bankrupt?  
Focus on irrelevant statistics is one reason why economists have been improperly revered, and some analysts have been unfairly pilloried, during long-term bull markets. But economists' latest miss was so harmful to their clients that their reputation for forecasting isn't surviving it."
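Prechter's cafe analogy is easy to make concrete with a short simulation. The sketch below (illustrative only; the day count, attack probability, and false-alarm rate are assumptions chosen to match the spirit of his numbers, not anything from the original text) compares raw accuracy against the statistic that actually matters here, survival:

```python
import random

random.seed(42)

DAYS = 10_000
ATTACK_PROB = 1 / 100  # on average, one attack day per 100 days

# Simulate which days an attack actually occurs
attacks = [random.random() < ATTACK_PROB for _ in range(DAYS)]

# The economist never warns: he simply "stays bullish" every day
economist_warnings = [False] * DAYS

# The analyst catches every attack day, but also issues false alarms,
# warning on roughly 11 days out of every 100 overall
analyst_warnings = [a or random.random() < 0.10 for a in attacks]

def accuracy(warnings):
    """Fraction of days where the warning matched reality."""
    return sum(w == a for w, a in zip(warnings, attacks)) / DAYS

def survives(warnings):
    """You die the first time an attack hits on a day you weren't warned."""
    return not any(a and not w for w, a in zip(warnings, attacks))

print(f"Economist: accuracy {accuracy(economist_warnings):.0%}, "
      f"survives: {survives(economist_warnings)}")
print(f"Analyst:   accuracy {accuracy(analyst_warnings):.0%}, "
      f"survives: {survives(analyst_warnings)}")
```

The economist scores roughly 99% accuracy and is dead; the analyst scores roughly 90% and is alive. Measured against the goal of staying alive, the economist is 0% successful and the analyst is 100% successful, exactly as Prechter argues.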
The following table offers a stark example of forecaster fallibility. Every December, Barron's gathers top Wall Street analysts, names most investors are familiar with, and asks them to forecast the value of the S&P 500 on December 31st of the following year. Granted, this is an impossible task; a probable range would be more reasonable. More importantly, a point forecast is also useless because markets can travel an infinite variety of paths to get from A to B, and the path matters to anyone with an active view on the markets. Regardless, these analysts were off by such a large margin that it can legitimately be claimed that they offer no predictive value whatsoever.


Source: Bloomberg, Butler|Philbrick & Associates

Note that the average estimate of 1650 from these 12 Chief Economists was 82% above the actual closing value on December 31st.

A recent study by James Montier of Societe Generale suggests that stock strategists at Wall Street's biggest banks -- including Citigroup Inc., Morgan Stanley and Goldman Sachs Group Inc. -- have failed to predict returns for the Standard & Poor's 500 Index every year this decade except 2005. Their forecasts were wrong by an average of 18 percentage points, according to data compiled by Bloomberg.

There is a mountain of evidence supporting the view that economists' forecasts are statistically no more accurate than random guesses scattered around a long-term trend. Yet most, if not all, advisors closely follow the views of their favorite economist and generally adhere to their recommendations. What is the definition of 'deranged'? Repeating the same mistake over and over while expecting a different result.
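The "no more accurate than random guesses around a long-term trend" claim can be illustrated with a toy simulation. Everything below is synthetic: the 7% assumed trend, the volatility figures, and the "expert" noise level are hypothetical parameters for illustration, not figures from the studies cited above:

```python
import random

random.seed(0)

TREND = 0.07        # assumed long-run annual return
MARKET_VOL = 0.18   # assumed annual volatility of actual returns
YEARS = 300         # long synthetic history for stable averages

# Made-up history of annual index returns (not real data)
actual = [random.gauss(TREND, MARKET_VOL) for _ in range(YEARS)]

# Naive forecaster: always predicts the long-term trend
naive_errors = [abs(r - TREND) for r in actual]

# "Expert": the trend plus his own noise, i.e. zero skill by construction;
# the noise only adds error, it cannot subtract any
expert_errors = [abs(r - (TREND + random.gauss(0, 0.10))) for r in actual]

mae_naive = sum(naive_errors) / YEARS
mae_expert = sum(expert_errors) / YEARS
print(f"Naive trend forecast, mean abs. error: {mae_naive:.1%}")
print(f"'Expert' forecast,    mean abs. error: {mae_expert:.1%}")
```

Because the "expert" deviations are independent of the actual outcomes, they add error on average rather than removing it; a skill-free forecaster can never expect to beat the simple trend extrapolation, which is precisely why the 18-percentage-point misses above are damning.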
