rule settings and parameters validation

Posted By: Trees

rule settings and parameters validation - 05/28/15 09:33 PM

The new manual contemplates a validation of the rules, parameters, and settings to ensure they are in line with the bank's transactions.

Anyone hear of any webinars (hint hint BOL) that cover what banks may do to fulfill this requirement? For example, if you are rule based, maybe take a 12 month history of rules, matches, and false positives and make a determination whether or not each rule needs to be adjusted/deleted, etc. We feel pretty good about our rules, but it's not easy when you have to defend why you chose X instead of Y after you have used the rules for so long. We have spent countless hours refining them over time but, well, we would sure like some help on putting the whole validation "report" together. Thanks!
Posted By: PrimeTime

Re: rule settings and parameters validation - 05/29/15 01:01 PM

Just kind of spitballing here, but what we've started to do is a SAR-based rule validation. What I mean by that is that if a SAR is filed that didn't hit our rules, we try to identify a way to create a new rule or modify an existing rule to catch the activity if it were to happen again. Obviously some types of activity are impossible to catch with rules (electronic intrusion, for example), but I'd say that if you modify your rules to match the activities that have warranted SAR filings from your institution, it would validate the effectiveness of your monitoring system to some degree.
Posted By: Pat Patriot Act

Re: rule settings and parameters validation - 06/01/15 11:45 AM

There are no exact rules about what's right and wrong in the area of validating detection parameters. But on a high level, you should start by taking an inventory of the red flags for suspicious activity identified by the FFIEC, FinCEN, and FATF. If you have enough coverage to detect most or all of the red flags, then you can start working on documenting your methodology.

When it comes to defending why you chose X instead of Y, it starts by documenting the objective of each "rule" or scenario.

For example, you may describe the intended purpose of Rule A as catching structured cash deposits made via consistent deposits just under the threshold. In a rule like this, you're determining if a customer's behavior matches a specific pattern, rather than determining if a customer's activity is abnormal based on their past and profile. As such, the best means of system tuning is to compare your rule to your Bank's SARs and investigations. For instance, say you have a rule that triggers an alert for 3 deposits between $5-10k in 15 days (filtering out customers with single deposits greater than $10k in 15 days). You compare the alert results to SAR filings, see an alert-to-SAR ratio of 1%, and determine that the rule needs tweaking. Your next step would be to review the SARs the rule DID link up with. Let's say you reviewed all of those SARs and determined that the minimum dollar amount within these "long term" patterns of structuring was $8,000. You could then justify increasing the "structuring" range from $5-10k to $8-10k. If you were looking to further protect your decision, you could run the rule in a test environment and prove out that you would not have had a "false negative" on any past SAR your system caught.
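The comparison described above could be sketched in a few lines of code. Everything here is illustrative: customer IDs, deposit amounts, and the SAR list are made-up stand-ins, not a real tuning methodology.

```python
# Hypothetical sketch of the alert-to-SAR comparison for a structuring
# rule ($5-10k deposits). All data below is invented for illustration.

# Each alert: customer ID -> the qualifying deposit amounts that fired it
alerts = {
    "C001": [5200, 6100, 5900],   # alerted, no SAR filed
    "C002": [8400, 9100, 8800],   # alerted, SAR filed
    "C003": [9500, 8200, 9900],   # alerted, SAR filed
}
sar_customers = {"C002", "C003"}  # customers with SARs on file

# Alert-to-SAR ratio under the current $5-10k range
ratio = len(sar_customers & alerts.keys()) / len(alerts)
print(f"alert-to-SAR ratio: {ratio:.0%}")

# Minimum deposit seen across SAR-confirmed patterns -- the basis
# for justifying a higher floor (e.g., raising $5k to $8k)
min_sar_amount = min(amt for cid in sar_customers for amt in alerts[cid])
print(f"minimum amount in SAR-linked patterns: ${min_sar_amount:,}")
```

In practice you'd pull the alert and SAR data from your monitoring system over the full lookback period, but the arithmetic is the same.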

Conversely, if you have rules based on what's considered "normal" for customers, then you could perform an analysis of transaction tendencies. You don't need a PhD for this, but I'd suggest familiarizing yourself with statistics to make sure you develop an appropriate base methodology for analyzing and setting thresholds. For instance, if you're looking to set the right monthly dollar volume threshold for businesses making cash withdrawals, you'd want to break down their activity by month as a population to analyze. From there, start by assessing the "type" of distribution the data fits. Is it a "normal" distribution? If so, then well-known central moment stats like the mean and standard deviation are acceptable. One common approach I've seen in ACAMS Today is to set the "excessive" threshold at 3 standard deviations from the mean, which has been a fundamental approach in the statistics world for identifying anomalies. If your data does NOT fit a normal distribution (often the case), proceed with caution. You may be better served using the median, percentiles, etc. The 99.7th percentile technically lines up with 3 standard deviations, but in my experience it goes too high. I tend to like using the 75th percentile for setting "excessive" thresholds with non-normal data. But remember - use common sense! As the old saying goes, not everything that can be counted counts, and not everything that counts can be counted.
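To make the two approaches above concrete, here's a small sketch using Python's standard `statistics` module. The monthly withdrawal totals are invented, and the single large month is included deliberately to show why a percentile can be a saner threshold than mean-plus-3-standard-deviations on skewed data.

```python
# Hypothetical monthly cash-withdrawal totals for a population of
# business customers (made-up figures; one outlier month included).
import statistics

monthly_totals = [12000, 9500, 11000, 15000, 8000, 200000,
                  10500, 9800, 13000, 12500, 11500, 9000]

# Approach 1: assumes a normal distribution -- mean + 3 standard
# deviations. The $200k outlier inflates both the mean and the stdev,
# pushing the threshold far above typical activity.
mean = statistics.mean(monthly_totals)
stdev = statistics.stdev(monthly_totals)
normal_threshold = mean + 3 * stdev

# Approach 2: distribution-free -- the 75th percentile (Q3).
# quantiles(n=4) returns the three quartile cut points; index 2 is Q3.
p75_threshold = statistics.quantiles(monthly_totals, n=4)[2]

print(f"mean + 3 stdev threshold: ${normal_threshold:,.0f}")
print(f"75th percentile threshold: ${p75_threshold:,.0f}")
```

On data like this, the percentile-based threshold lands near the bulk of the population while the 3-sigma threshold is dragged up by the outlier, which illustrates the "proceed with caution" point for non-normal data.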

PrimeTime's suggestion is also a good idea for an ongoing "model governance" control (write it up in your procedures!). I'd also say use the same approach for any non-system notification that triggers a review or investigation. For example, if you get a grand jury subpoena and find suspicious activity that didn't pop in the software, see if you have a weakness to be patched. Same thing with internal reports, etc. Always BOLO for the system missing stuff.
Posted By: BrianC

Re: rule settings and parameters validation - 06/23/15 02:15 PM

Trees, BOL obviously heard you. I will be presenting Automated Systems Validation: From Implementation to Validation on August 20th.
Posted By: Trees

Re: rule settings and parameters validation - 06/24/15 01:36 PM

Thanks, folks. Great timing, Brian!