The Retail Lending Test infographic presents an oversimplified version of the steps and calculations required. It creates the impression that only a few steps are needed to compute a bank's performance, but it omits some very important steps and calculations. For example, the first thing a bank must determine is which product lines qualify as major product lines, and that can vary from market to market; the graphic assumes that determination has already been made. The graphic also glosses over the fact that it depicts computations for only a single assessment area. The computations must be repeated for each and every facility-based and retail lending assessment area, and then for potentially hundreds of "outside retail lending areas," to be computed at the "institution level." It also omits that results must be accumulated at the state level and the multistate MSA level before reaching the "institution level."
Even the steps depicted in the graphic appear misleadingly simple. The example assumes there are 6 major retail lending product lines. Step 1 involves (a) gathering the benchmarks, (b) computing the calibrated conclusion ranges, and then (c) comparing the bank's penetration rates to the "calibrated" standards. That means 4 comparisons for each of the 5 full product lines (low- and moderate-income borrower metrics and low- and moderate-income geographic metrics for consumer loans, as well as borrower and geographic metrics for small business and small farm loans) and 2 comparisons (LMI geographies only) for the multifamily mortgage line. That adds up to 22 determinations of "conclusions," each of which must then be converted to a "score" (22 more comparisons). Having computed a score for each metric of each major product line, each score must then be weighted by a demographic variable (22 more computations). Next, for each of the 5 full product lines, the weighted borrower score and the weighted geographic score must be combined into a simple average product line score (the multifamily mortgage score is computed only at the geographic level, so no simple average is needed there). The next step is to take all the simple average product line scores (6 in the example) and weight them by relative dollar volume in the given assessment area. This means 6 more computations to arrive at the score (another computation) that is then converted (another computation, by comparison to the "Conclusions derived from scores" table in the example) into the "Recommended Retail Lending Test Conclusion" for all the bank's major product lines in that assessment area.
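The scoring sequence described above can be sketched in code. Every number here is a hypothetical placeholder, not a regulatory value: in practice the weighted scores come from comparing penetration rates to calibrated benchmark ranges, and the real "Conclusions derived from scores" table has its own cutoffs.

```python
# Sketch of the assessment-area scoring sequence, with hypothetical
# inputs. Real scores come from comparing a bank's penetration rates
# to calibrated benchmark ranges; these values are illustrative only.

# Demographically weighted borrower and geographic scores per line
# (multifamily has a geographic score only).
lines = {
    "closed_end_mortgage": {"borrower": 7.2, "geographic": 6.8},
    "open_end_mortgage":   {"borrower": 6.5, "geographic": 7.0},
    "small_business":      {"borrower": 5.9, "geographic": 6.1},
    "small_farm":          {"borrower": 6.0, "geographic": 5.5},
    "auto":                {"borrower": 7.5, "geographic": 7.1},
    "multifamily":         {"geographic": 6.4},
}

# Relative dollar volume of each line in this assessment area.
volumes = {
    "closed_end_mortgage": 0.40, "open_end_mortgage": 0.10,
    "small_business": 0.25, "small_farm": 0.05,
    "auto": 0.15, "multifamily": 0.05,
}

# Simple average of borrower and geographic scores per line
# (multifamily just keeps its geographic score).
line_scores = {name: sum(s.values()) / len(s) for name, s in lines.items()}

# Dollar-volume-weighted score for the assessment area.
aa_score = sum(line_scores[n] * volumes[n] for n in volumes)

def conclusion(score):
    # Stand-in for the "Conclusions derived from scores" table;
    # the cutoffs below are invented for illustration.
    if score >= 8.5:
        return "Outstanding"
    if score >= 6.0:
        return "High Satisfactory"
    if score >= 4.5:
        return "Low Satisfactory"
    return "Needs to Improve"

print(round(aa_score, 2), conclusion(aa_score))
```

Note that even this sketch compresses the hardest part into its inputs: the benchmark gathering and calibrated-range construction that produce each weighted score happen before the first line of this calculation.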
Can someone add up all the steps and calculations just for a single assessment area?
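As a back-of-the-envelope tally, counting only the computations explicitly named above for the 6-product-line example (this is a sketch of the text's own arithmetic, not an official count, and it leaves out the benchmark gathering and calibrated-range construction entirely):

```python
# Rough tally of the computations described above for ONE assessment
# area, using the example of 6 major product lines: 5 with both
# borrower and geographic metrics, plus multifamily (geographic only).

full_lines = 5           # lines with borrower and geographic metrics
metrics_per_full = 4     # LMI borrower + LMI geographic comparisons
multifamily_metrics = 2  # LMI geographic comparisons only

conclusions = full_lines * metrics_per_full + multifamily_metrics  # 22
scores = conclusions         # each conclusion converted to a score: 22
weightings = conclusions     # each score weighted demographically: 22
simple_averages = full_lines # borrower/geographic average per line: 5
volume_weights = 6           # dollar-volume weighting per line: 6
combine = 1                  # summing weighted line scores into one
convert = 1                  # mapping the score to a conclusion

total = (conclusions + scores + weightings
         + simple_averages + volume_weights + combine + convert)
print(total)  # 79 computations, before benchmarks even enter the count
```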
And the foregoing must be repeated for potentially hundreds of assessment areas, and then for any MSA or statewide non-MSA area in the entire country, outside the bank's retail lending areas, where the bank extends any major product line loans (there is no minimum threshold). This last step alone could involve hundreds of computations, all accumulated to the "institution level."
All of this means that even a modestly large bank may face many thousands of computations.
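To see how quickly the totals grow, consider a purely hypothetical scale-up (the area counts below are assumptions for illustration, not figures from the rule or from any bank, and the per-area figure is a rough estimate in line with the example above):

```python
# Hypothetical scale-up: all counts here are illustrative assumptions.
per_area = 80            # rough computations per assessment area
assessment_areas = 300   # hypothetical facility-based + retail lending areas
outside_areas = 200      # hypothetical "outside retail lending areas"

area_level = per_area * (assessment_areas + outside_areas)
print(area_level)  # 40000 computations, and that is BEFORE the state,
                   # multistate MSA, and institution-level accumulations
```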
Now add to the foregoing that the computations must be done on a consolidated basis over the "evaluation period," which could span any number of years. Then consider what happens when demographics change in the middle of an evaluation period!
How can any bank manage its CRA risk under such a system?
_________________________
CRA Exam Preparation, CRA Performance Evaluations, Key Performance Benchmarks, & maps