What is the Federal Reserve Learning About Artificial Intelligence?

Lael Brainard, a member of the Board of Governors of the Federal Reserve System, recently gave a speech titled "What Are We Learning about Artificial Intelligence in Financial Services?" Before I tell you what her answers were, I'll tell you what I think they're learning: Not enough.

Brainard started her speech with the now-customary breathtaking "OMG, AI and data are transforming the world at an alarming rate!" intro that, apparently, all speakers are required to make these days when talking about AI. She followed that by citing a Financial Stability Board report which identified four areas where AI could impact banking:

  1. Combine expanded consumer data sets with new algorithms to assess credit quality, price insurance policies, or provide financial advice to consumers through chatbots.
  2. Strengthen back-office operations, such as advanced models for capital optimization, model risk management, stress testing, and market impact analysis.
  3. Enhance trading and investment strategies, from identifying new signals on price movements to using past trading behavior to anticipate a client’s next order.
  4. Advance compliance and risk mitigation by banks in areas like fraud detection, capital optimization, and portfolio management.

The challenge as Brainard sees it:

"The potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection, or the financial system. The question, then, is how should we approach regulation and supervision? It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms."

Brainard then took an interesting tack by stating that "existing regulatory and supervisory guardrails are a good place to start as we assess the appropriate approach for AI processes." Specifically, she referenced:

  • The National Science and Technology Council. In a study addressing regulatory activity generally, the NSTC concluded that if an AI-related risk “falls within the bounds of an existing regulatory regime, the policy discussion should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI.”
  • The Fed’s guidance on model risk management (SR Letter 11-7). This policy establishes the “effective challenge” of models by a “second set of eyes”--unbiased, qualified individuals separated from an AI model’s development, implementation, and use. Brainard stated "if reviewers are unable to evaluate a model in full or if they identify issues, they might recommend the model be used with greater caution or with compensating controls."
  • The Fed's guidance on vendor risk management (SR 13-19/CA 13-21). Brainard notes that "the vast majority of the banks that we supervise will have to rely on the expertise, data, and off-the-shelf AI tools of nonbank vendors to take advantage of AI-powered processes." The vendor risk-management guidance discusses best practices for banks regarding due diligence, selection, and contracting processes in selecting an outside vendor.

After citing some potential benefits of AI, Brainard did acknowledge a drawback:

"It was recently reported that a large employer attempted to develop an AI hiring tool for software developers that was trained with a data set of the resumes of past successful hires, which it later abandoned. Because the pool of previously hired software developers in the training data set was overwhelmingly male, the AI developed a bias against female applicants, going so far as to exclude resumes of graduates from two women’s colleges."

[Not for nothing, but that "large employer" was the same one that New York and Virginia just gave huge taxpayer-funded tax breaks to in exchange for opening new headquarters in those states]
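
To see the mechanism concretely, here is a minimal sketch, using synthetic data and scikit-learn, of how a model can reconstruct that kind of bias from the training labels alone. Every feature name and number below is a made-up illustration, not anything from the actual tool:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# The pool of past hires skews heavily male, as in the reported case.
is_male = rng.random(n) < 0.9

# Proxy feature: "attended a women's college." It says nothing about
# job performance, but it correlates with gender.
womens_college = ((~is_male) & (rng.random(n) < 0.5)).astype(float)

# Job-relevant skill, independent of gender.
skill = rng.normal(0.0, 1.0, n)

# Historical hiring decisions rewarded skill but also favored men, so
# the labels themselves encode the past bias.
hired = ((skill + 1.5 * is_male + rng.normal(0.0, 1.0, n)) > 1.0).astype(int)

# Gender is never given to the model--only skill and the proxy feature.
X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# The proxy feature gets a negative weight: the model has reconstructed
# the historical bias and will downgrade women's-college resumes.
print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))

Note that nothing in the code is "sexist." The bias lives entirely in the historical labels, which is exactly why reviewing the model's code, or its vendor, will miss it.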

Brainard concluded by "looking ahead" and stating:

"Perhaps one of the most important early lessons is that not all potential consequences are knowable now--firms should be continually vigilant for new issues in the rapidly evolving area of AI. Firms should not assume that AI approaches are less susceptible to problems because they are purported to be able to “learn” or less prone to human error."

If this is all that the Fed has learned about artificial intelligence, the banking industry is in for a lot of headaches.   

There are four issues that Ms. Brainard, and presumably the Fed, are failing to take into account:

  • It's all about the data. The example of Amazon (i.e., the "large employer") using a sexist AI program to evaluate job candidates can't be identified or prevented by evaluating the use of AI. As Brainard herself alludes to, it's all about the data. Asking regulators or even bank compliance officers to proactively identify potential bias in datasets isn't always going to be as easy as identifying gender discrepancies. Moreover, the whole premise of AI is that it's a "learning system"--as more data is gathered and processed, algorithms are updated. This means that at any point in time, the potential impact--good or bad--of an AI system is dependent on the amount and quality of the data that the AI system is processing (the sketch after this list illustrates the point). How the hell do you regulate that? How are the Fed's current policies on risk and vendor management any help or guideline for doing that?
  • Proactive reviews are impossible. If, as Ms. Brainard says, "not all potential consequences are knowable now," then regulatory and compliance reviews can only be done retroactively. This means that banks will pass reviews one year for what they're doing, and then get penalized years later when some regulator determines that an AI algorithm produced a negative impact.
  • Embedded AI. Brainard's continued references to AI simplistically treat it as a standalone technology. For sure, there are standalone AI systems like chatbots. But increasingly, AI tools will become embedded in existing apps and systems, integrated to the point where AI-related code and algorithms may comprise anywhere from 10% to 90% of a system. In other words, AI will be embedded in many bank systems. How the hell do you regulate AI in that scenario?
  • Regulatory and compliance skill sets. According to a study from the Tencent Research Institute, there is market demand for millions of AI researchers and practitioners worldwide, but just 300,000 people with the right skill sets. That's why these folks make the big bucks, people. And it's why it's foolish to think that the Fed and state regulatory bodies will be able to employ AI-knowledgeable people for regulatory and compliance purposes. In other words, the regulatory approaches Ms. Brainard is calling for are unenforceable.
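
To illustrate the point-in-time problem from the first two bullets, here is a minimal sketch, again with synthetic data and scikit-learn, in which the exact same training pipeline passes a simple disparity check one year and fails it the next, purely because the incoming data drifted. The approval_rate_gap check and all numbers are hypothetical:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_batch(n, shift):
    """Synthetic loan applications; `shift` skews outcomes against group B."""
    group_b = rng.random(n) < 0.3
    income = rng.normal(0.0, 1.0, n)
    # Later batches embed a growing historical bias against group B.
    approved = ((income - shift * group_b + rng.normal(0.0, 1.0, n)) > 0).astype(int)
    # For simplicity the group flag is a direct input; in practice it
    # would enter through proxies, as in the hiring example above.
    X = np.column_stack([income, group_b.astype(float)])
    return X, approved, group_b

def approval_rate_gap(model, X, group_b):
    """Difference in predicted approval rates between the two groups."""
    pred = model.predict(X)
    return pred[~group_b].mean() - pred[group_b].mean()

# Year 1 review: trained on unbiased data, the model shows no gap.
X1, y1, g1 = make_batch(5000, shift=0.0)
m1 = LogisticRegression().fit(X1, y1)
print("year 1 gap:", round(approval_rate_gap(m1, X1, g1), 2))

# Year 2: the same code, retrained on drifted data, now penalizes group B.
X2, y2, g2 = make_batch(5000, shift=1.5)
m2 = LogisticRegression().fit(X2, y2)
print("year 2 gap:", round(approval_rate_gap(m2, X2, g2), 2))

A year 1 review of this model would find nothing wrong; the year 2 behavior only exists after retraining. That's why a point-in-time review can't certify an AI system going forward.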

As I said at the start of this post, if Brainard and the Fed want to ask what they're learning about AI, the answer is: Not enough.

Ron Shevlin
Director of Research
Cornerstone Advisors