Executive Order on Artificial Intelligence: What Does It Mean for Mortgage Lenders?

On October 30, 2023, President Biden issued an executive order encouraging federal agencies to take action to ensure the safe, secure, and trustworthy development and use of artificial intelligence (the “Order”).  While some of the Order’s directives are broad and apply generally to federal agencies — such as the directive to “protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI” — other directives are more targeted and relate to specific industries.

With respect to mortgage lending, the Order contains specific guidance for the Federal Housing Finance Agency (FHFA), the Department of Housing and Urban Development (HUD), and the Consumer Financial Protection Bureau (CFPB).  Mortgage regulators have made fair lending regulation and enforcement a clear priority, so it comes as no surprise that the Order’s mortgage- and housing-specific directives all relate to discrimination.

More specifically, Section 7.3 of the Order, “Strengthening AI and Civil Rights in the Broader Economy,” encourages the CFPB, FHFA, and HUD to do the following:

  • Require regulated entities to evaluate underwriting models and automated valuation models (AVMs) for bias and disparities affecting protected classes; and

  • Issue guidance explaining how the fair lending laws apply to algorithmic advertising and providing best practices to avoid violations of federal law.

Overview of CFPB Guidance on These Issues

The Executive Order is consistent with prior CFPB guidance highlighting the risks of bias and discrimination in the use of AI for mortgage underwriting, AVMs, and digital marketing:

  • Mortgage Underwriting.  In April 2023, the CFPB warned of the potential for algorithmic underwriting models to make credit decisions on a prohibited basis in violation of applicable fair lending laws.  Most recently, in September 2023, the CFPB provided additional guidance explaining that the use of underwriting AI does not excuse lenders from identifying a sufficiently specific reason for denial in adverse action notices.

  • AVMs.  The CFPB has similarly warned of bias and discrimination risk in AVMs and expressed concern that AVMs could perpetuate systemic discriminatory appraisal practices.  In June 2023, the CFPB, FHFA, and other agencies proposed a rule that would impose quality control standards on the use of AVMs.  The proposed rule is designed in part to address the risk of bias in AVMs and would apply to lenders that rely on AVMs for a valuation determination and to secondary market participants that rely on AVMs to grant appraisal waivers (e.g., Fannie PIWs and Freddie ACEs).

  • Algorithmic Marketing.  The CFPB is concerned that algorithmic marketing will lead to “digital redlining.”  More specifically, marketing algorithms used to determine the recipients of digital advertisements may contain bias that excludes protected classes from viewing the advertisements.  This type of algorithmic marketing was the subject of a DOJ settlement announced in 2022 alleging that a social media company’s advertisement delivery system violated the Fair Housing Act.  With the DOJ now announcing new redlining enforcement actions almost monthly — actions premised in part on lenders’ alleged failure to advertise sufficiently in majority-minority census tracts — lenders using algorithmic marketing should appropriately manage and monitor digital redlining risk.

How Lenders Can Mitigate AI Risk

Lenders that are using AI should consider the following risk mitigation activities:

  • Perform Testing.  AI models should be tested frequently, with results documented, to ensure they are performing as intended and are not inadvertently excluding groups on a prohibited basis (see the illustrative sketch following this list).

  • Create Written Policies and Procedures.  Lenders should develop written policies and procedures describing testing procedures and explaining how the company uses AI in a compliant manner.

  • Ensure There Is a “Human in the Loop.”  Human oversight of AI models is critical to accuracy, safety, and efficiency.  This is particularly the case when a new AI model is introduced into the company and there is no proven track record for how the model performs.

  • Document How the Model Works.  Lenders must be able to explain to regulators how their AI models were built and how they operate.  The CFPB has made clear that black-box models are not going to get a pass from regulators.

  • Manage Vendors.  When using vendor-provided AI solutions, lenders remain on the hook for compliance.  Accordingly, lenders should ensure that they can still accomplish all of the above when engaging AI vendors.
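By way of illustration, the sketch below shows one way a compliance or model risk team might quantify disparities in an underwriting model’s approval decisions: an adverse impact ratio, screened against the 0.80 benchmark borrowed from the employment-law “four-fifths rule.”  The group labels, benchmark, and sample data are hypothetical assumptions for demonstration purposes; neither the Order nor CFPB guidance prescribes this particular test.

# Illustrative sketch only: computing adverse impact ratios (AIR) for an
# underwriting model's approval decisions across hypothetical applicant groups.
# The group labels, the 0.80 screening benchmark, and the sample data are
# assumptions for demonstration; they are not drawn from the Order or any rule.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group, approved) pairs, approved is a bool.
    Returns each group's approval rate divided by the reference group's rate."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    return {g: rates[g] / ref_rate for g in rates}

# Hypothetical model outputs: (group, approved?)
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 60 + [("B", False)] * 40

for group, ratio in adverse_impact_ratios(sample, reference_group="A").items():
    # A ratio below roughly 0.80 is a common screening flag warranting
    # further fair lending review; it is not itself a legal conclusion.
    flag = "review" if ratio < 0.80 else "ok"
    print(f"group {group}: AIR = {ratio:.2f} ({flag})")

In practice, a statistical screen of this kind should feed into the documented testing and written policies described above, with flagged results escalated for fair lending review.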

If you need assistance with implementing AI in a legal and compliant manner or otherwise managing fair lending risk, please contact our legal team.

 

About The Author

 

Matt Jones is an attorney at Mitchell Sandler with extensive experience advising mortgage lenders and other consumer finance companies in a variety of regulatory compliance, enforcement, and litigation matters.

 
 
