Article

Time to Assess Bias in Machine Learning Models for Credit Decisions

Journal

Journal of Risk and Financial Management

Publisher

MDPI
DOI: 10.3390/jrfm15040165

Keywords

ML; algorithm; fair lending; disparate; bias; discrimination

Funding

  1. Ally Financial

Abstract

Focus on fair lending has intensified recently as bank and non-bank lenders apply artificial intelligence (AI)-based credit determination approaches. The data analytics techniques behind AI and machine learning (ML) have proven powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. At the same time, ML may reduce the potential for discrimination by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML applications in loan underwriting and pricing, the fair lending assessments typically led by compliance and legal functions will likely continue to evolve. In this paper, the author discusses unique considerations around ML within existing fair lending risk assessment practices for underwriting and pricing models and proposes additional evaluations to be added to current practice.
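As a concrete illustration of the kind of disparate impact screening discussed in this context (a minimal sketch, not taken from the paper), the snippet below computes the adverse impact ratio: the approval rate of a protected group divided by the approval rate of a reference group, with values well below roughly 0.8 commonly treated as a flag for further fair lending review. The function name, sample data, and threshold are illustrative assumptions.

```python
import pandas as pd

def adverse_impact_ratio(decisions: pd.Series, group: pd.Series,
                         protected: str, reference: str) -> float:
    """Adverse impact ratio (AIR): approval rate of the protected group
    divided by the approval rate of the reference group.
    `decisions` holds 1 for approve, 0 for deny."""
    approval_protected = decisions[group == protected].mean()
    approval_reference = decisions[group == reference].mean()
    return approval_protected / approval_reference

# Made-up example data for illustration only.
data = pd.DataFrame({
    "approved": [1, 0, 0, 1, 0, 1, 1, 1, 0, 1],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

air = adverse_impact_ratio(data["approved"], data["group"],
                           protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}")  # 0.50 here, below the ~0.8 rule of thumb
```

In practice such a ratio is only a first-pass screen; the evaluations the paper discusses for ML underwriting and pricing models would sit on top of, not replace, this kind of summary statistic.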

