Artificial Intelligence/Machine Learning (AI/ML) models are not new concepts to either the financial industry or regulators. While regulators have focused on the implications of AI/ML for years, they have recently increased their scrutiny of the use of such models in credit decisions:
- In March 2021, the Office of the Comptroller of the Currency (OCC) issued a request for information1 on financial institutions’ use of AI/ML models to better understand how such models are used in the provision of financial services.
- In August 2021, the OCC incorporated into its Model Risk Management Handbook2 guidelines for bank examiners on the use of AI/ML models at the institutions they supervise.
- In October 2021, the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the OCC jointly announced the new “Combatting Redlining Initiative,”3 where CFPB Director Rohit Chopra emphasized “digital and algorithmic redlining” and noted that the CFPB would be “closely watching for digital redlining, disguised through so-called neutral algorithms”4 as they may undermine the goal of maintaining “equal opportunities” for society.
- On a related topic, the CFPB Director highlighted the need to build controls around how personal data is consumed by business models5 and noted that the CFPB would support “the Federal Trade Commission in its work to monitor business models that rely on harvesting and monetizing personal data.”
The “Combatting Redlining Initiative” combines the individual agencies’ fair-lending focus in a coordinated enforcement setting. The initiative applies to both depository and non-depository institutions and highlights the role of non-depository lenders in the mortgage space. While this particular initiative focuses on mortgage lending activities, regulators continue to focus on potential fair-lending violations across a broader range of lending, payment, and servicing activities and asset types, such as property assessed clean energy programs, student loans, and small business loans, among others.
Implications for Lending/Servicing Institutions
Some lenders and servicers have recently undergone or are undergoing fair lending-related examinations, while others are responding to inquiries and investigations, which generally include receiving data requests from the CFPB. Similarly, the CFPB has requested information6 from large payment platforms, as well as from buy now, pay later providers. These requests were significant in scope, and to the extent that such data is consumed by AI/ML models, there could be regulatory concerns associated with “data harvesting” and data mining in credit decision-making.
Institutions typically translate the information collected during the application process into data, which is then consumed by underwriting and pricing models. However, such data may contain errors and biases against specific consumer groups. Furthermore, institutions may not have clear rules around the data fields consumed by AI/ML models, which have recently come to represent a material share of model inventories.
In light of the recent regulatory literature, Guidehouse expects that:
- There will be continued interest by regulators in potential discrimination and unfair lending practices
- Lending institutions (both depository and non-depository) will increasingly receive requests for data that are consumed by AI/ML credit models
- Regulators will bring an increasing number of enforcement cases against lending institutions
Institutions should consider the following actions:
- Establish exploratory tools, such as data visualization in the form of univariate and bivariate charts, to identify gaps and improve data quality and accuracy for data that is consumed by models employed in decision-making, including AI/ML models
- Ensure that their systems can collect the necessary data when preparing for, for example, small business lending data collection rulemaking (Section 1071 of the Dodd-Frank Act) and establish controls to verify the accuracy of data collected during the application process
- Develop sound model governance and documentation for transparency
- Test for and address potential bias in AI/ML models
- Perform qualitative checks for potentially discriminatory credit-decision7 outcomes, using methods such as matched-pair analysis or another sound qualitative assessment of credit decisions, and present the results in a form that is easy to understand
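The exploratory data-quality checks described above (univariate summaries to surface gaps and errors in model input data) can be sketched as follows. This is a minimal illustration, not a prescribed tool; the field names and sample values are hypothetical.

```python
import pandas as pd

def data_quality_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Univariate summary per field: share of missing values and
    number of distinct values, sorted by missingness."""
    summary = pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,
        "n_unique": df.nunique(),
    })
    return summary.sort_values("missing_pct", ascending=False)

# Hypothetical application data with gaps to be flagged
apps = pd.DataFrame({
    "income": [55000, None, 72000, 61000],
    "loan_amount": [200000, 150000, 175000, 180000],
    "zip_code": ["10001", "10001", "60601", None],
})
print(data_quality_summary(apps))
```

Fields with high missingness or implausibly few distinct values are natural starting points for the bivariate charts and deeper data-accuracy reviews the section describes.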
How Guidehouse Can Help
Institutions should focus on providing open access to credit and on fair lending and servicing practices. They should revisit their lending models—including both underwriting and pricing—and evaluate whether those models are prone to producing disparate outcomes for disadvantaged consumers and protected classes. Lending institutions that recognize the requirements and expectations of the evolving regulatory landscape are likely to benefit from being proactive and assessing their lending practices holistically.
Guidehouse offers customized and unique solutions to assist institutions with:
- Identifying models that may potentially be subject to fair lending rules and assessing AI/ML models within a fair lending risk framework
- Following the academic and practitioner literature to generate race/gender proxies, and evaluating potential correlations between those proxies and other data fields that AI/ML models might be using to infer such information
- Testing for statistical significance while placing equal emphasis on economic significance, and generating actionable insights
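The proxy-correlation evaluation described above can be sketched as a simple screen: compute correlations between a demographic proxy probability (e.g., from a surname/geography-based method) and candidate model inputs, and flag fields that could act as stand-ins for protected-class status. All names, values, and the flagging threshold below are illustrative assumptions.

```python
import pandas as pd

# Hypothetical data: a proxy probability for protected-class membership
# alongside candidate model input fields (values in $000s where noted)
df = pd.DataFrame({
    "proxy_prob": [0.9, 0.8, 0.2, 0.1, 0.85, 0.15],
    "zip_median_income": [40, 42, 95, 100, 41, 98],
    "loan_amount": [150, 160, 155, 150, 158, 152],
})

# Pearson correlation of each field with the proxy probability
corr = df.corr()["proxy_prob"].drop("proxy_prob")
flagged = corr[corr.abs() > 0.5]  # illustrative flagging threshold
print(flagged)
```

Here the geography-based field correlates strongly with the proxy while the loan amount does not; strongly correlated fields warrant review before being consumed by AI/ML credit models, since statistical association alone does not establish a fair-lending violation.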