Investigating the Risks of Algorithmic Bias and Explainability Failures in Credit Risk Models

International Journal of Financial Innovations & Risk Management (IJFIRM)
2025 – Volume 1 – Issue 1 – Pages 57–81

Authors:

Imran Hussain Shah 1, Shahid Khan 2, Sehat Khan 3

1. University of Lahore, Lahore, Pakistan

2. University of Lahore, Lahore, Pakistan

3. University of Lahore, Lahore, Pakistan

Abstract

This study examines the risks of algorithmic bias and explainability failures in AI-driven credit risk models, focusing on how these issues affect fairness, transparency, and regulatory compliance in financial institutions. It investigates whether current explainability tools and governance mechanisms are sufficient to ensure ethical and accountable decision-making in credit scoring. A mixed-methods design was adopted, integrating machine learning experimentation with fairness and explainability metrics alongside semi-structured interviews with credit risk officers, compliance specialists, and AI practitioners. Quantitative analysis used models such as logistic regression, random forests, and XGBoost, trained on credit risk datasets and evaluated using disparate impact ratios, equal opportunity measures, and SHAP/LIME interpretability tools. Qualitative insights were gathered to contextualize technical findings and assess institutional practices. Results show that while advanced models like XGBoost achieve higher predictive accuracy, they also amplify bias, particularly against protected groups such as younger applicants and foreign workers. Logistic regression provided fairer outcomes but with lower predictive power. Explainability tools such as SHAP and LIME improved model transparency but often failed to deliver accessible explanations for non-technical users. Interviews revealed widespread practitioner concerns regarding regulatory ambiguity, insufficient governance structures, and gaps between technical explainability and compliance requirements. The findings highlight the urgent need for fairness-aware machine learning, systematic bias audits, and stakeholder-oriented explainability frameworks in financial institutions. Regulators must set clearer thresholds for acceptable bias and explainability standards, while institutions should embed fairness and interpretability into model development and governance. Implementing these practices will reduce compliance risks under frameworks such as the EU AI Act, GDPR, and ECOA, while also strengthening consumer trust in digital lending.
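
As an illustration of the fairness metrics referenced above, the following minimal Python sketch computes a disparate impact ratio and an equal opportunity difference for a binary approval model. It is not the study's code; the toy arrays, the protected-group labelling, and the commonly cited 0.8 adverse-impact threshold are assumptions used only for illustration.

```python
import numpy as np

def disparate_impact_ratio(y_pred, protected):
    """P(approved | protected group) / P(approved | reference group).
    Values below roughly 0.8 are often read as evidence of adverse impact."""
    return y_pred[protected == 1].mean() / y_pred[protected == 0].mean()

def equal_opportunity_difference(y_true, y_pred, protected):
    """Gap in true-positive rates (approval rates among creditworthy
    applicants) between the protected and reference groups; 0 means parity."""
    tpr_protected = y_pred[(protected == 1) & (y_true == 1)].mean()
    tpr_reference = y_pred[(protected == 0) & (y_true == 1)].mean()
    return tpr_protected - tpr_reference

# Toy data: 1 = approved / actually creditworthy; protected = e.g. younger applicant.
y_true    = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred    = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

print("Disparate impact ratio:", round(disparate_impact_ratio(y_pred, protected), 3))                      # 0.5
print("Equal opportunity difference:", round(equal_opportunity_difference(y_true, y_pred, protected), 3))  # -0.333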
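
The interpretability step can be sketched in the same spirit, assuming an XGBoost classifier and the shap package; the synthetic data and feature stand-ins below are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # stand-ins for e.g. income, debt ratio, age, employment tenure
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer yields per-applicant, per-feature contributions to the score,
# the raw material for "reason code"-style explanations of a credit decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

Aggregating absolute SHAP values gives a global view of feature influence, while the per-applicant rows support the individual-level explanations that regulators and declined applicants would actually need to see.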

Keywords

Algorithmic Bias, Explainability, Credit Scoring Models, Machine Learning in Finance, Fairness in AI, Model Risk Management, Financial Regulation, Responsible AI, Discrimination in Lending

JEL Code: D14, D91, G41, G53


Copyright © 2025 IJFIRM
