There are no exact rules about what's right and wrong in the area of validating detection parameters. But on a high level, you should start by taking an inventory of the red flags for suspicious activity identified by the FFIEC, FinCEN, and FATF. If you have enough coverage to detect most or all of the red flags, then you can start working on documenting your methodology.
Last edited by patsfan; 06/01/15 11:46 AM.
When it comes to defending why you chose X instead of Y, it starts by documenting the objective of each "rule" or scenario.
For example, you might describe the intended purpose of Rule A as catching structured cash deposits made via consistent deposits just under the reporting threshold. In a rule like this, you're determining whether a customer's behavior matches a specific pattern, rather than whether a customer's activity is abnormal relative to their history and profile. As such, the best means of system tuning is to compare your rule's output to your Bank's SARs and investigations. For instance, say you have a rule that triggers an alert for 3 deposits between $5-10k in 15 days (filtering out customers with single deposits greater than $10k in 15 days). You compared the alert results to SAR filings, saw an alert-to-SAR ratio of 1%, and determined the rule needed tweaking. Your next step would be to review the SARs the rule DID link up with. Let's say you reviewed all of those SARs and determined that the minimum dollar amount appearing in these "long term" structuring patterns was $8,000. You could then justify tightening the "structuring" range from $5-10k to $8-10k. If you were looking to further protect your decision, you could run the revised rule in a test environment and prove out that it would not have produced a "false negative" on any past SAR your system caught.
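To make the mechanics concrete, here's a minimal sketch of that kind of pattern rule. Everything here is hypothetical - the deposit records, the `structuring_alerts` function name, and the exact window handling are illustrative assumptions, not anyone's production logic:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical deposit records: (customer_id, date, amount)
deposits = [
    ("C1", date(2015, 5, 1), 9500),
    ("C1", date(2015, 5, 6), 9200),
    ("C1", date(2015, 5, 12), 8800),
    ("C2", date(2015, 5, 3), 12000),  # single deposit over $10k
    ("C2", date(2015, 5, 7), 9000),
    ("C3", date(2015, 5, 2), 6000),
    ("C3", date(2015, 5, 20), 7000),  # outside the 15-day window
]

def structuring_alerts(deposits, low=5000, high=10000,
                       min_count=3, window_days=15):
    """Flag customers with >= min_count deposits in [low, high)
    within a window_days span, skipping windows that also contain
    a single deposit over `high` (the filter from the rule above)."""
    by_customer = defaultdict(list)
    for cust, d, amt in deposits:
        by_customer[cust].append((d, amt))

    alerts = set()
    for cust, txns in by_customer.items():
        txns.sort()
        for start, _ in txns:  # slide a window from each deposit date
            end = start + timedelta(days=window_days)
            window = [a for d, a in txns if start <= d <= end]
            if any(a > high for a in window):
                continue  # filter out: large single deposit present
            if sum(1 for a in window if low <= a < high) >= min_count:
                alerts.add(cust)
    return alerts

print(structuring_alerts(deposits))  # {'C1'}
```

Tweaking the thresholds is then just changing `low=5000` to `low=8000` and re-running against the historical alert population, which is exactly the kind of back-test you'd document.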
Conversely, if you have rules based on what's considered "normal" for customers, then you could perform an analysis of transaction tendencies. You don't need a PhD for this, but I'd suggest familiarizing yourself with statistics to make sure you develop an appropriate base methodology for analyzing and setting thresholds. For instance, if you're looking to set the right monthly dollar volume threshold for businesses making cash withdrawals, you'd want to break down their activity by month as a population to analyze. From there, you would want to start by assessing the "type" of distribution the data fits. For example, is it a "normal" distribution? If so, then well known central moment stats like the mean and standard deviation are acceptable. One common approach I've seen in ACAMS Today is to set the "excessive" threshold at 3 standard deviations above the mean, which has been a fundamental approach in the Statistics world for identifying anomalies. If your data does NOT fit a normal distribution (often the case), proceed with caution. You may be better served using the median, percentiles, etc. The 99.87th percentile technically lines up with 3 standard deviations above the mean (the familiar 99.7% figure covers the two-sided ±3 SD band), but in my experience, that's too high. I tend to like using the 75th percentile for setting "excessive" thresholds with non-normal data. But remember - use common sense! As the old saying goes, not everything that can be counted counts, and not everything that counts can be counted.
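Here's a rough sketch of what that comparison looks like in practice, using made-up right-skewed monthly totals (real cash activity is rarely normal). The data, segment, and variable names are all illustrative assumptions:

```python
import random
import statistics

random.seed(42)

# Hypothetical monthly cash-withdrawal totals for a business segment.
# Log-normal draws give the right-skewed shape cash data often has.
monthly_totals = [round(random.lognormvariate(9, 0.8), 2)
                  for _ in range(500)]

# Parametric threshold: mean + 3 standard deviations.
# Only defensible if the data is approximately normal.
mu = statistics.mean(monthly_totals)
sigma = statistics.stdev(monthly_totals)
parametric_threshold = mu + 3 * sigma

# Non-parametric alternatives: percentiles of the observed population.
# statistics.quantiles(n=100) returns the 99 percentile cut points.
pct = statistics.quantiles(monthly_totals, n=100)
p75 = pct[74]   # 75th percentile
p99 = pct[98]   # 99th percentile

print(f"mean + 3*sd : ${parametric_threshold:,.2f}")
print(f"75th pctile : ${p75:,.2f}")
print(f"99th pctile : ${p99:,.2f}")
```

On skewed data like this, the mean + 3 SD threshold lands far above most of the population, which is why the percentile-based cut points are usually the more defensible starting point for non-normal distributions.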
PrimeTim's suggestion is also a good idea for an ongoing "model governance" control (write it up in your procedures!). I'd also use the same approach for any non-system notification that triggers a review or investigation. For example, if you get a grand jury subpoena and find suspicious activity that didn't pop in the software, see if you have a weakness to be patched. Same thing with internal reports, etc. Always BOLO for the system missing stuff.