AI, credit risk management and data protection
The use of artificial intelligence is a burgeoning reality in European banking. However, its use in credit risk analysis is subject to especially restrictive regulation.
Artificial intelligence (AI) can be useful in managing the credit risk that banks incur as financial intermediaries. Simultaneously, its use introduces regulatory challenges regarding personal data protection. This discussion explores how these two realities interact and what the future may bring.
Credit risk management and AI
AI can significantly enhance credit scoring models. Traditional scoring models rely on structured data (such as credit history, debt, and general payment behavior) obtained from the customer with their consent for processing. However, AI enables the expansion of this analysis through the management of vast amounts of unstructured or semi-structured data from customer activity on social networks, e-commerce, internet searches, and text messages. Consequently, this technology enables banks to process data previously unused and employs powerful algorithms to determine customer behavior patterns with greater precision.
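To make the contrast concrete, here is a toy scorecard in Python. Every feature name and weight is hypothetical, chosen only to show how alternative data can enter a score alongside traditional bureau data; it is a sketch, not a representation of any real bank model.

```python
# Illustrative only: a toy scorecard contrasting traditional structured
# credit-bureau inputs with hypothetical alternative-data features of
# the kind AI models can exploit. All names and weights are invented.

def traditional_score(applicant: dict) -> float:
    """Linear scorecard over structured credit-bureau data."""
    weights = {
        "years_of_credit_history": 5.0,
        "on_time_payment_rate": 300.0,   # fraction in [0, 1]
        "debt_to_income_ratio": -200.0,  # fraction in [0, 1]
    }
    return sum(w * applicant.get(k, 0.0) for k, w in weights.items())

def ai_augmented_score(applicant: dict) -> float:
    """Adds hypothetical alternative-data signals to the same scorecard."""
    alt_weights = {
        "ecommerce_purchase_regularity": 40.0,  # behavioral signal
        "utility_bill_punctuality": 60.0,
    }
    alt = sum(w * applicant.get(k, 0.0) for k, w in alt_weights.items())
    return traditional_score(applicant) + alt

# A "thin-file" applicant: short credit history, but strong behavior.
thin_file = {
    "years_of_credit_history": 0.5,
    "on_time_payment_rate": 1.0,
    "debt_to_income_ratio": 0.2,
    "utility_bill_punctuality": 0.95,
}
base = traditional_score(thin_file)
augmented = ai_augmented_score(thin_file)
```

The point of the sketch is the one made in the text: a borrower with little transaction history gains score from signals the traditional model cannot see.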
AI enables better assessment of a borrower’s ability and willingness to repay debt
Traditionally, banks have relied on an extensive transaction history of the borrower to make credit decisions, which has significantly restricted access to credit for companies and individuals with poor track records and without collateral. AI may mitigate this limitation by enabling the use of alternative tools to assess the willingness and ability to repay debt.
AI can also play other roles in the bank financing process: it can simplify the modeling of behavioral patterns according to customer segments, allowing for better personalization of credit offers. While the risk profile of some customers may improve with AI, others may see their profile worsen. AI can also help identify non-compliance with regulations or internal bank policies, collect unknown customer data, draft the reports and documents required for issuing loans, calculate relevant ratios, and so on.
What are banks already doing in this area? According to the European Banking Authority (EBA), European banks are already using some forms of AI in internal capital models, more in the estimation of default probabilities than in model validation and collateral valuation. According to the EBA, most banks permitted to use internal models rely on various types of regression analysis to determine capital requirements for credit risk, with relatively few banks using decision trees and other machine learning algorithms. In any case, banking supervisors must approve the use of AI in internal models.
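The regression analysis the EBA refers to is typically logistic regression mapping borrower features to a probability of default (PD). A minimal sketch of such a model follows; the features and coefficients are purely hypothetical, chosen only to illustrate the functional form, not calibrated to any real portfolio.

```python
import math

def probability_of_default(debt_to_income: float,
                           missed_payments_12m: int,
                           intercept: float = -3.0,
                           b_dti: float = 2.5,
                           b_missed: float = 0.8) -> float:
    """Logistic PD model: PD = 1 / (1 + exp(-(a + b1*x1 + b2*x2))).

    Coefficients are hypothetical; a real model would be estimated
    from historical default data and validated under supervisory rules.
    """
    z = intercept + b_dti * debt_to_income + b_missed * missed_payments_12m
    return 1.0 / (1.0 + math.exp(-z))

# Low leverage, clean payment record vs. high leverage, four misses.
low_risk = probability_of_default(0.1, 0)
high_risk = probability_of_default(0.6, 4)
```

The appeal of this form for supervisors is exactly what the article notes: each coefficient can be inspected and its effect on the PD explained, which is far harder with more complex machine learning techniques.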
The EBA has expressed concerns regarding the EU Capital Requirements Regulation (CRR). It has been observed that some machine learning techniques are so complex that it is increasingly difficult for humans to exercise the judgment required by bank solvency rules when using and validating internal models (Article 174 of the CRR). This complexity implies a need to recruit technical specialists to explain the functioning of these models.
Automated processes and data protection
The primary challenges and uncertainties regarding the application of AI in credit risk management revolve around issues of data protection and privacy.
Article 22 of the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing, including credit profiling, when such decisions produce legal effects on them or similarly significantly affect them. Exceptions to this prohibition include cases where the individual has given explicit consent, where the automated credit rating process is necessary to enter into a contract between the individual and the data controller, or where it is authorized by national legislation with suitable safeguards in place (ensuring that no more data is collected than necessary, and that the volume of data collected is proportionate to the bank's purpose).
The approach to automatic process management is restrictive: prioritizing privacy rights over efficiency in financial decisions
A 7 December 2023 judgment of the Court of Justice of the European Union interpreted this article and concluded that a bank cannot rely on a credit rating provider that uses automated processes to generate scores with a significant impact on individuals (such as loan denials based on those scores). Even though the provider does not itself make the credit decision, the court held that its rating plays a determining role in that outcome.
This restrictive approach to managing automated processes places the privacy of data subjects above enhanced efficiency in many financial decision-making processes. The implications for banks are clear: they cannot rely on external providers to make decisions on granting loans. If the automated processes are internal, banks must inform individuals about them, and if the bank assesses individual creditworthiness based on automated processes, the individual must receive prior information about this action. An element of human judgment must be incorporated into the decision-making process.
EU Directive 2023/2225 on consumer credit agreements faithfully reflects this approach. Banks must inform consumers in a clear and understandable manner when providing a personalized credit offer based on the automated processing of personal data (including an explanation of the logic and risks involved in the automated processing of personal data, as well as its meaning and effects on the decision). Additionally, consumers may express their viewpoint to the lender and request a review of the creditworthiness assessment and the decision to grant a loan.
Data processing principles when profiling clients
According to the GDPR, data processing must rest on a lawful basis and comply with the principles set out in Article 5. The European Data Protection Supervisor (EDPS) has determined that even if data is publicly available on the internet, this does not mean it can be lawfully processed (Article 9(2)), given that the data subject may not have explicitly intended the data to be public. Furthermore, processing must comply with the principles of accuracy (requiring an assessment of source reliability), necessity (the data must be adequate, relevant, proportionate, and limited to the purposes of the processing), transparency, and data minimization.
EBA guidance on loan approval and monitoring includes the categories of data that can be used to profile a customer: income and other sources of debt repayment; financial assets and liabilities; and other financial obligations. This means that the data must have a clear relationship to the borrower's ability to repay the debt, without disproportionately affecting their basic rights to privacy and data protection. The aforementioned EU Directive on consumer credit includes similar provisions. As illustrated, the regulatory approach limits the scope of the data a bank can use to aspects closely related to the borrower's proven financial capacity.
AI regulation strengthens data protection
According to Article 5 of the European AI Regulation (whose prohibitions apply as of 2 February 2025), the use of AI to evaluate or classify individuals or groups of people on the basis of their behavior, socioeconomic status, or personal characteristics is prohibited if the resulting profiling may lead to detrimental or unfavorable treatment (such as the denial of a loan, or a higher price, for the purchase of housing and other basic services) and is unrelated to the context where the data was originally generated or collected. For more clarity on these rules, we must await the publication in early 2025 of the AI Board's guidelines on the scope of the Article 5 prohibitions for banking.
AI systems that assess the creditworthiness of individuals will be rated as high risk and will have very demanding requirements
Article 6 of the AI Regulation establishes that AI systems used to assess the creditworthiness of individuals or to establish their credit rating are considered high-risk, with an exception for AI systems used to detect financial fraud. Although the article provides some exceptions, an AI system will always be considered high-risk when it profiles individuals, since it determines whether a person has access to financial resources or essential services (such as housing, electricity, and telecommunications).
The classification of an AI system as high-risk results in stringent regulatory requirements, particularly in relation to risk management. These requirements will be exhaustive and continuous, and the system must be tested during its development process and before being put into service. Data management by the system must also be rigorous, paying particular attention to the original purpose of the collection of personal data; processing operations for data preparation (labeling, filtering, updating, enrichment, and aggregation); assessment of the availability, quantity, and adequacy of the required datasets; and examination of possible biases that may adversely affect fundamental rights or lead to any discrimination, as well as measures to detect, prevent, and mitigate such biases. Finally, these systems will have very strict requirements for technical documentation, assessment of impact on fundamental rights, record-keeping, transparency, human supervision, and cybersecurity.
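One concrete form the bias examination described above can take is a simple approval-rate comparison across groups. In the sketch below, the four-fifths threshold is an illustrative heuristic borrowed from disparate-impact analysis, not a figure taken from the AI Regulation, and the group data is invented.

```python
# Illustrative bias check: compare loan-approval rates between two
# groups and flag the model for human review when the ratio of the
# lower rate to the higher rate falls below an assumed 80% threshold.

def approval_rate(decisions: list) -> float:
    """Share of approved applications (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower approval rate to the higher one."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    lo, hi = sorted((ra, rb))
    return lo / hi if hi else 1.0

# Hypothetical decision outcomes for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 1, 0]   # 75.0% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
flagged = ratio < 0.8   # escalate to human review if below threshold
```

A production system would, of course, need to control for legitimate risk factors before attributing such a gap to bias; the sketch only shows the detection step the Regulation's data-governance duties contemplate.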
Need to strengthen regulatory compliance
Article 15(1)(h) of the GDPR gives individuals the right to request information from data controllers about the logic behind their assessment processes. However, the complexity of these algorithms and of the process and parameters followed to decide on a loan application make it difficult to explain the rejection of such an application. Therefore, the obligations of transparency and explainability of the algorithms included in the AI regulation are important for developers and vendors of AI systems. These firms must include the necessary documentation when marketing systems to bank clients, and a robust compliance department becomes essential for avoiding future problems.
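For a simple linear scorecard, the "logic involved" can be explained by decomposing the score into per-feature contributions, which is the kind of answer Article 15(1)(h) contemplates. The sketch below uses hypothetical features and weights; the article's point is precisely that complex machine learning models resist this straightforward decomposition.

```python
# Illustrative explainability for a linear scorecard: report the
# decision together with the features that pulled the score down.
# Feature names, weights, and the threshold are all hypothetical.

def explain_decision(features: dict, weights: dict,
                     threshold: float) -> dict:
    """Decompose a linear score into per-feature contributions."""
    contributions = {k: weights[k] * features.get(k, 0.0)
                     for k in weights}
    score = sum(contributions.values())
    # Most negative contributions first.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return {
        "approved": score >= threshold,
        "score": score,
        "main_negative_factors": [k for k, v in ranked if v < 0][:2],
    }

weights = {"on_time_payment_rate": 300.0, "debt_to_income_ratio": -200.0}
result = explain_decision(
    {"on_time_payment_rate": 0.7, "debt_to_income_ratio": 0.5},
    weights, threshold=150.0,
)
```

Here the applicant can be told not only that the loan was denied, but which factor drove the denial, something a deep or ensemble model cannot offer without additional explainability tooling.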
There is a consensus that the results obtained with AI may infringe privacy by making confidential data public or leaking data to third parties; that AI can be used by criminals to impersonate identities or mount phishing attacks; and that AI can generate reputational risks by relying on incorrect or outdated data. Addressing these threats places heavy demands on organizations' risk management processes.
Conclusions
AI can enhance the efficiency of many lending processes and reduce the risk of loan defaults. In the European Union, AI regulation places significant emphasis on data protection and privacy issues, which limits the use of AI in this area. Banks must ensure that AI systems have robust governance models and safeguards to ensure that they are used correctly. These include fundamental rights impact assessments (Article 27 of the AI Regulation) and data protection impact assessments (Article 35 of the GDPR).
If the algorithms are correct and the data relevant, then AI reveals a reality previously unknown to the bank
What would a highly restrictive approach to AI in this area mean? It would prevent, for example, a customer whose risk profile is improved by AI from benefiting from better bank financing. However, if AI worsens a customer's risk profile, then a restrictive approach benefits the customer. In other words, a restrictive approach would prevent banks from identifying worsening risk profiles, which could lead to more defaults, and would imply a certain degree of credit risk socialization.
Is it objectionable that a customer’s credit profile may worsen due to a more thorough treatment of a larger dataset? If the algorithms are correct and the data is relevant, then AI is only revealing a reality that was previously unknowable by the bank. In other words, AI has not created a new reality. An alternative regulatory policy could offer a more liberal approach to AI that attempts to detect and eliminate variables that create inappropriate biases in credit decisions. Public policies could also be explored to partially compensate the 'losers' resulting from a more thorough credit assessment approach.
The debate on the regulatory approach to AI in the EU needs to be viewed in a global context. It appears that, following Donald Trump's election victory, a much less restrictive approach will be applied in the United States. Such a loosening of restrictions would create competitive disadvantages for European technology companies and banks. Perhaps a new Draghi report will be needed.
Former director general of the Spanish Fund for Orderly Bank Restructuring and board member of the Single Resolution Board