Risk Management & Risk Assessment
Risk has been a key term and management concept for the last decade, but we are still trying to understand what risk management means and to develop or refine techniques for risk management and assessment. Risk assessment is an essential component - if not the core component - of a risk management program.
Risk assessment both draws on and builds the risk management program. An effective program uses risk analysis and risk assessment in a continuous cycle. Risk analysis identifies sources of risk as they exist and as they emerge. Risk assessment evaluates and monitors the effectiveness of the compliance program and identifies emerging areas of risk or places where risk management systems are weakening.
To begin the process, we must define risk and identify the sources of risk. Governor Bies of the Board of Governors of the Federal Reserve states that compliance risk can be defined as the risk of legal or regulatory sanctions, financial loss, or damage to an organization's reputation and franchise value. Using this definition, risk identification should account for regulatory violations together with their consequences and the likelihood of examiner findings.
Risk analysis should also consider the difficulty of tasks. For example, placing holds on checks can involve complex determinations that must be made in a short period of time. Meeting time requirements can be affected by where and how documents such as disclosures are generated.
Finally, any risk analysis must account for change that is not regulatory. This includes changes in the economy and in your market, as well as changes in product offerings, delivery techniques, and even changes to the organization itself.
First, we look at inherent risk. What are the risks of certain products or activities before any controls are put into place? Risk is inherent in everything a financial institution does. The base level of any risk management program is inherent risk.
Risk is also a function of the likelihood of exposure. How often something happens acts as a multiplier: more frequent means more risk. A high-risk activity that occurs frequently is high times high, or very high. A high-risk activity that occurs only rarely becomes high times low: moderate or even low. Other factors affect the frequency formula. For example, the frequency of bank examinations increases the exposure level of banks, while finance companies and mortgage bankers that are not examined face much lower exposure for the same inherent risk.
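The severity-times-frequency idea above can be expressed as a small sketch. The numeric scores and rating bands here are assumptions chosen for illustration, not a methodology from the article.

```python
# Illustrative sketch: combine inherent risk severity with frequency of
# exposure. The numeric scores and rating bands are assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_rating(inherent: str, frequency: str) -> str:
    """Multiply severity by frequency and map the score back to a rating."""
    score = LEVELS[inherent] * LEVELS[frequency]
    if score >= 6:
        return "very high"
    if score >= 4:
        return "high"
    if score == 3:
        return "moderate"
    return "low"

# A high-risk activity that occurs frequently: high x high = very high.
print(risk_rating("high", "high"))  # very high
# A high-risk activity that occurs rarely: high x low = moderate.
print(risk_rating("high", "low"))   # moderate
```

The multiplication simply formalizes the intuition in the text: frequency scales inherent severity up or down.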
The likelihood that a customer will identify a problem also increases the exposure level. If customers knew the definition of finance charge and how to calculate APRs, there would be a higher risk of restitution.
When you analyze your risk, use categories that are appropriate to your bank's size and exposure. High, medium, and low may be sufficient. More complex organizations may want additional risk levels.
Also consider risk trends. Fifteen years ago, no one paid much attention to flood hazard insurance, CIP as we know it today hadn't been invented, and nontraditional mortgages were only a gleam in someone's eye. Developments in weather damage, economic change, money laundering, and identity theft have changed the risk levels of these issues.
Controls for Risk
When we see risk, we try to limit or control it. Controls either prevent or reduce the risk. What remains after controls are in place is residual risk. Controls are essential to managing risk and play a key role in determining residual risk. The ultimate decision is whether the residual risk is at a level the institution can tolerate.
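One simple way to picture the relationship between inherent risk, controls, and residual risk is a sketch like the following. The scoring scale, the 0-to-1 control-effectiveness factor, and the tolerance threshold are all assumptions for illustration.

```python
def residual_risk(inherent_score: float, control_effectiveness: float) -> float:
    """Residual risk: the inherent risk left after controls reduce it.

    control_effectiveness is an assumed 0..1 factor (1.0 = perfect control).
    """
    return inherent_score * (1.0 - control_effectiveness)

# Assumed example: inherent risk scored 9 on a 1-10 scale, controls
# judged 80% effective, and a tolerance threshold of 2.0.
tolerance = 2.0
residual = residual_risk(inherent_score=9.0, control_effectiveness=0.8)
print(residual <= tolerance)  # True: within what the institution tolerates
```

The decision the text describes is the final comparison: does the residual fall at or below what the institution will tolerate?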
Today's risk management tools include assigning accountability, using automation, dual controls, edit controls, second reviews, and the always essential training. Together with monitoring and auditing, these controls can significantly reduce risk.
Of these tools, accountability and training form the core pieces. Without either one, chaos ensues. Programs don't work because people don't take responsibility or don't know how to do their jobs. A risk management analysis and plan for a new product, service, or location should take into account the resources available for delivery. Who should be responsible for which parts or phases? Who needs to know about it, and what do they need to know? How will that training be delivered?
Once a control is in place, monitoring enters the process. It is essential to monitor both the risk and the control. Part of monitoring is determining how effectively the control is working. For example, many institutions use two HMDA software programs: one to collect HMDA data and one to report it. Monitoring should look closely at the accuracy of both programs and processes, identify errors in either one, and look for errors in how the two programs share or translate data.
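The two-program HMDA example suggests a simple reconciliation check. The sketch below compares records between a hypothetical collection system and reporting system; the loan IDs, field names, and record structure are assumptions, not any particular vendor's format.

```python
def reconcile(collected: dict, reported: dict) -> list:
    """Flag loans where the two systems disagree or one is missing a record."""
    issues = []
    for loan_id in sorted(set(collected) | set(reported)):
        if loan_id not in reported:
            issues.append((loan_id, "missing from reporting system"))
        elif loan_id not in collected:
            issues.append((loan_id, "missing from collection system"))
        elif collected[loan_id] != reported[loan_id]:
            issues.append((loan_id, "fields do not match"))
    return issues

# Hypothetical records keyed by loan ID.
collected = {"1001": {"amount": 150}, "1002": {"amount": 200}}
reported = {"1001": {"amount": 150}, "1002": {"amount": 210}, "1003": {"amount": 90}}
for issue in reconcile(collected, reported):
    print(issue)
```

A check like this surfaces exactly the two failure modes the text warns about: errors inside either program, and errors in how data moves between them.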
Other forms of monitoring include quality assurance or compliance monitoring directed at the risks identified. If your program relies on quality assurance, you should know precisely what QA looks at and whether it actually monitors compliance performance. Many quality assurance programs do not look at compliance issues such as collection and entry of HMDA data or whether all finance charges were properly identified and included in calculations.
All monitoring should produce exception reports. Monitoring should also track the volume of each activity and any trends. Both are measures of emerging risk.

Reports
Effective reporting and follow-up are core pieces of any risk management program. The quality of the program also depends on the content of the reports. You should be tracking the activities that are most likely to indicate risk, whether through slippage or change.
Reports should go to management and the board on a regular schedule, based on the need for reporting to this level and the amount of risk the institution accepts. Findings, positive or negative, should be consistently reported. Also, show any trends. An improvement is always gratifying to report, but increases in problems are the essence of identifying and managing risk.
It is one of the frustrating ironies of compliance that the best way to prove a program works is to find (and correct) violations. Reports tell your story. The story you need to tell is the effectiveness of the program, so don't be afraid of findings. If the reports say nothing, you may be perfect, but it is more likely that your monitoring is missing risk indicators.
The program doesn't end with reporting. The reports generate a new cycle of assessing emerging risk and the effectiveness of risk management and controls.
Key Risk Indicators (KRI)
Risk indicators are the tools you use to measure performance and to identify problems. They are the elements of performance that can be quantified and monitored, and they are the elements that will be most effective in flagging problems.

Risk indicators are violations or exceptions to policy. They could be errors in disclosures or failures to deliver information on time. Each regulation or product has its own risk indicators.
In setting your key risk indicators, there are several sources available to you. First, there is your own analysis. You should know requirements and the work process well enough to identify what can go wrong.
Second, there is the word of the regulators. The agencies send clear signals about what they consider to be high and emerging risk. It is well worth your time to pay attention to these signals. For example, the nontraditional mortgage guidance provides what the agencies consider to be the key risk elements of those products. They've done the list for you.
Third, learn from others. Examination citations are indicators of what examiners look for and where other institutions have encountered problems. The number of penalties imposed for violations of BSA and Flood Hazard Insurance sends a clear signal that these areas should get careful attention.
Key Performance Indicators (KPI)
The elements of a risk management program sound logical enough. The tough question is how to make it work. This involves making use of your analysis of risk together with monitoring and analysis of performance.
Start with your key risk indicators (KRI). These are the elements you should be watching most closely. Your program should be designed to monitor and measure the effective management of risk for these indicators.
For each KRI, establish goals or benchmarks according to your institution's appetite for risk. Whether to set perfection as the goal is a business decision. The indicator could be a number or percentage of exceptions, or a function of both exceptions and volume.
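An exception-rate benchmark of the kind described can be sketched as follows. The 1% benchmark and the sample figures are assumptions for illustration, not recommended targets.

```python
def within_benchmark(exceptions: int, volume: int, max_rate: float) -> bool:
    """KPI check: is the exception rate within the institution's benchmark?"""
    rate = exceptions / volume if volume else 0.0
    return rate <= max_rate

# Assumed figures: 3 exceptions in 400 files against a 1% benchmark.
print(within_benchmark(3, 400, 0.01))  # True: 0.75% is within target
```

Expressing the benchmark as a rate rather than a raw count is what makes the indicator "a function of both exceptions and volume."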
The performance indicator will measure whether the institution stayed within its targets of risk. For example, one performance indicator could be that no exceptions or violations occur that could trigger monetary consequences such as restitution or penalties. In this context, the trends in flood insurance enforcement actions illustrate how risk can change. Ten years ago, flood insurance violations drew a write-up but not much more unless the problem was egregious. Now, examiners look harder and more effectively than they used to, and findings trigger mandatory civil money penalties. Although compliance requirements have not changed significantly during the past decade, the risk attached to violations has risen dramatically.
Performance indicators also show trends and trends can indicate emerging risk. Trends should be given careful attention because they can indicate that changes are needed in your risk management program. Trends can occur because people have forgotten their training, circumstances have changed, or the product or delivery system has changed. Trends tell you what is and what is not working.
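A minimal sketch of trend detection on an indicator compares the recent average exception rate to the prior period's. The window size and sample rates are assumptions.

```python
def trending_up(rates: list, window: int = 3) -> bool:
    """Flag an upward trend: the recent average exceeds the prior average."""
    if len(rates) < 2 * window:
        return False  # not enough history to compare two windows
    recent = rates[-window:]
    prior = rates[-2 * window:-window]
    return sum(recent) / window > sum(prior) / window

# Assumed monthly exception rates drifting upward.
print(trending_up([0.010, 0.010, 0.011, 0.015, 0.018, 0.022]))  # True
```

A flag like this does not explain why the drift is happening, but it tells you where to start asking the questions the text lists: forgotten training, changed circumstances, or a changed product or delivery system.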
- Look at your risk management program. Does it have clear goals and key performance indicators?
- Compare your risk management program to your institution's strategic plan. Do they match or support each other?
- Identify your key players in risk management, including those who are most likely to see any failures or changes in the program. These people should be part of the risk management team.
Copyright © 2007 Compliance Action. Originally appeared in Compliance Action, Vol. 12, No. 5, 4/07
First published on 04/01/2007