In my role doing audits and AML model validations, I've only heard this question in passing from a few clients here and there. I haven't heard of an examiner focus in this area. Thankfully. I say thankfully for a reason - the technology is misunderstood. What vendors refer to as "AI" is often just capitalizing on the buzzword - but I'll acknowledge that the largest institutions are likely experimenting with what most would genuinely think of as "AI", which may have been the reason for the interagency "Joint Statement on Innovative Efforts."
Last edited by Pat Patriot Act; 05/03/19 09:38 PM.
I put AI in quotes because it's a moving target. There's a saying, Tesler's Theorem: "AI is whatever hasn't been done yet." Why is that quote important? Because folks in BSA/AML are being sold on the idea that they don't already have AI, when that's not the case. All AML and OFAC systems - from the worst to the best - have "AI."
Think about an AML system "learning" the expected activity of a customer by computing the average over a long timeframe, then creating an alert for a deviation from what the system has "learned" is "normal." And doing that for all customers overnight. Pretty standard, right? Then imagine doing the same thing in the 1970s. Tell me that's not AI!
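To make that concrete, here's a minimal Python sketch of that overnight "learning." The function names, the 3x tolerance, and the dollar figures are all hypothetical illustrations on my part - but every transaction monitoring system runs some version of this logic:

```python
# Sketch of "learned average plus deviation alert" monitoring.
# Thresholds, names, and data are hypothetical illustrations.
from statistics import mean

def build_profile(history):
    """'Learn' a customer's expected monthly activity: the long-run average."""
    return mean(history)

def check_deviation(profile, current, tolerance=3.0):
    """Alert when current activity exceeds a multiple of the learned average."""
    return current > profile * tolerance

# Twelve months of a hypothetical customer's total monthly transaction volume.
history = [2_000, 2_200, 1_900, 2_100, 2_050, 1_950,
           2_100, 2_000, 2_150, 1_980, 2_020, 2_080]

profile = build_profile(history)          # "learned" normal: ~2,044
alert = check_deviation(profile, 9_500)   # this month's volume
print(alert)  # True - 9,500 is far beyond 3x the learned average
```

That's the whole trick: a batch job, an average, and a threshold. Whether you call it "AI" or arithmetic is mostly marketing.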
When asked this question, I respond with another question. Forget what "AI" means for a second and also forget your current system constraints - what would you like a "Robot AML Investigator" to do for you that's not being done already? How many of those things do you think you'd get criticized for not doing, or for not having a human do? And what's the cost-benefit? Those are the questions you need to ask when you weigh what you really need against the slick marketing of programs or vendors touting "AI."
Here are a few "AI" functions that I predict could realistically be in place at small to mid-sized FI's within the next 20 years:
- Negative news/media search (nightly - entire customer and transaction counterparty base)
- Private ATM and MSB licensing checks (nightly - entire customer base)
- SAR narrative and form initial completion (will still need human review/approval)
- EDD narrative and form initial completion (will still need human review/approval)
- Using natural language processing, big data, and machine learning to generate *more* alerts with a focus on qualitative data (who, what, and why) instead of the mostly quantitative models we have right now
I don't foresee some magical program performing these functions without tangible, documented algorithms. They'll become possible simply because there will be more data, better data, and more creative applications of data science to the existing AML space.
I validate a lot of systems, and none of them are doing those things yet. One area I think is further off is alert and SAR decision-making. The problem is that, with machine learning, someone has to "train" the AI agent, and bad training equals bad results - see what happened with Microsoft's Tay chatbot if you want a real-life example of what I mean.