The statisticians at Zacks Investment Research used several separate methods to identify the best bank-earnings estimators.
Zacks used a technique called relative error to select top performers in each category: money-center banks; regional banks; thrifts; credit card and general finance companies; and private mortgage companies and government-sponsored enterprises.
First, Zacks calculated the differences between earnings estimates and actual results. All the errors were expressed as positive, or absolute, values. This prevents over- and under-estimates from offsetting each other. Average peer group errors were then determined.
Finally, analysts' individual errors were normalized by expressing them as a percentage of peer group errors. The resulting ratios tell whether analysts' estimates were more or less erroneous than the peer group averages.
A score of 75%, for example, indicates that an analyst's earnings forecast errors averaged only three-quarters the size of the peer group's. Conversely, a score of 125% indicates errors of a greater magnitude than the peer group's.
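The normalization described above can be sketched in a few lines of code. The analyst names and error figures below are invented for illustration; they are not from the Zacks study.

```python
# Hypothetical sketch of the relative-error normalization described above.

def relative_error_scores(analyst_errors):
    """Express each analyst's average absolute forecast error as a
    percentage of the peer group's average error."""
    peer_average = sum(analyst_errors.values()) / len(analyst_errors)
    return {name: 100 * err / peer_average
            for name, err in analyst_errors.items()}

# Average absolute forecast errors, in cents per share (made-up data)
errors = {"Analyst A": 3.0, "Analyst B": 5.0, "Analyst C": 4.0}
scores = relative_error_scores(errors)
# The peer average error is 4.0 cents, so Analyst A scores 75%
# (less erroneous than the group) and Analyst B scores 125%
# (more erroneous than the group).
```

Scores below 100% mark analysts who beat the peer group average; scores above 100% mark those who trailed it.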
To qualify for inclusion in the category competition, an analyst had to follow at least a fourth of the companies in a peer group. Additionally, for each company followed, the analyst had to submit quarterly estimates for three of the past four quarters and annual estimates for three of the past four years.
In addition to identifying category winners, Zacks examined analysts' performances bank-by-bank. Tables indicate the best quarterly and annual earnings estimators for the biggest institutions in the American Banker's coverage universe.
In the quarterly analysis, Zacks calculated the average number of cents by which analysts' earnings estimates varied from actual results. The forecast errors were expressed as positive, or absolute, values, preventing sequential over- and under-estimates from offsetting each other.
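The quarterly measure amounts to a mean absolute error expressed in cents. The sketch below uses invented estimate and actual figures to show why absolute values matter: a signed average of the same misses would net out to zero.

```python
# Sketch of the quarterly measure: the average absolute deviation,
# in cents, between estimates and actual results. Figures are
# invented for illustration.

def mean_absolute_error_cents(estimates, actuals):
    """Average the absolute per-quarter misses so that over- and
    under-estimates cannot offset each other."""
    misses = [abs(est - act) for est, act in zip(estimates, actuals)]
    return sum(misses) / len(misses)

# Four quarters of estimated vs. actual EPS, in cents (made-up data)
estimates = [52, 48, 60, 55]
actuals   = [50, 50, 58, 57]
mae = mean_absolute_error_cents(estimates, actuals)
# mae is 2.0 cents; a signed average of the same misses
# (+2, -2, +2, -2) would misleadingly come out to zero.
```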
These individual performances could have been normalized against peer group errors, as was done for the category winners. We apologize to readers that this additional step was not taken.
In the annual analysis, Zacks calculated the percentage of each analyst's earnings estimates proving more accurate than the consensus.
One drawback to this approach is that although the study identifies how often an analyst does better or worse than the consensus, it does not measure by how much. Also, introducing a third type of score diminishes the comparability of the data across this special report.
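The annual measure can be sketched as a simple win rate against the consensus. The error figures below are hypothetical.

```python
# Sketch of the annual measure: the share of an analyst's forecasts
# that were closer to actual results than the consensus estimate.
# All numbers are invented.

def pct_beating_consensus(analyst_errors, consensus_errors):
    """Percentage of forecasts where the analyst's absolute error
    was smaller than the consensus absolute error."""
    wins = sum(1 for a, c in zip(analyst_errors, consensus_errors)
               if a < c)
    return 100 * wins / len(analyst_errors)

# Absolute errors in cents for four annual forecasts (made-up data)
analyst   = [2, 6, 1, 3]
consensus = [4, 5, 3, 3]
score = pct_beating_consensus(analyst, consensus)
# The analyst beat the consensus in 2 of 4 years, a score of 50.0%.
```

Note the drawback described above: the analyst who beat the consensus twice by wide margins and the one who beat it twice by a single cent receive the same 50% score.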
Readers will note that peer averages published in the annual rankings vary markedly, rarely hitting close to the intuitive ratio of 50%. There are several reasons for this:
A concentration of highly accurate or highly erroneous estimates can severely skew the average forecast error for an institution. In such cases, far more than half of a field of analysts will perform either better or worse than the average.
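A small numeric illustration of this skew, using invented error figures: one badly missed estimate pulls the group's average error upward, so most of the field lands on the "better than average" side.

```python
# Illustration of the skew described above. The error figures
# are invented.

errors = [1, 1, 1, 1, 16]            # absolute errors in cents
average = sum(errors) / len(errors)  # one outlier lifts this to 4.0
better = sum(1 for e in errors if e < average)
pct_better = 100 * better / len(errors)
# Four of five analysts (80%) beat the average, far from the
# intuitively expected 50%.
```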
And in some cases, particularly with institutions showing earnings volatility, the majority of analysts may perform inconsistently. That lowers the average success rate for members of the peer group.
Finally, an unknown number of the consensus errors (or benchmarks) include scores from analysts whose individual standings weren't included in the Zacks study. For example, analysts not submitting a minimum number of estimates were excluded from the rankings, but the estimates they did publish were included in the calculation of consensus errors.