Understanding the Impact of the EU AI Act on Financial Institutions

The world's first comprehensive AI regulation has revolutionised the AI landscape by ensuring ethical practices, safeguarding fundamental rights, and driving innovation across the European Union and beyond. Learn what this means for financial institutions and your AI strategies.

 

Freya Scammells
AI Practice Lead
freya.scammells@caspianone.co.uk

 

The EU AI Act, the world's first comprehensive AI law, is a game-changer for the ethical and safe use of artificial intelligence within the European Union (EU). Effective August 1, 2024, it aims to foster trustworthy AI, protect fundamental rights, and drive innovation. The Act applies to AI providers and users within the EU and beyond, classifying AI systems into four risk categories. High-risk systems, especially in financial services, face strict requirements like risk management and conformity assessments. This regulation ensures transparency and compliance, pushing financial institutions to adapt and innovate responsibly. This article will explore the Act and the key requirements impacting the financial services industry.

What is the EU AI Act?

The EU AI Act is a landmark regulation aimed at ensuring the safe and ethical development and use of artificial intelligence (AI) within the EU. Proposed by the European Commission in April 2021, the Act entered into force on August 1, 2024, making it the world’s first comprehensive AI law. Its primary goals are to promote trustworthy AI, protect fundamental rights, and enhance innovation in the AI sector.

The Act aims to ensure that AI systems are safe and respect existing law on fundamental rights and Union values, to strengthen governance and the effective enforcement of that law, and to facilitate the development of a single market for lawful, safe, and trustworthy AI applications.

The Act applies to providers and users of AI systems within the EU and providers and users of AI systems located in third countries (countries outside the EU, including the UK) if the system's output is used in the EU.

The Act takes a risk-based approach where AI systems are classified into different risk categories:

  • Unacceptable risk

  • High risk

  • Limited risk

  • Minimal risk

High-risk systems are subject to strict obligations, including risk management, data governance, technical documentation, and human oversight. Transparency obligations require users to be informed when they are interacting with an AI system, and market surveillance authorities will monitor and enforce compliance with the Act. Many of the AI use cases common in the financial services industry fall under the high-risk classification.

Impact on Financial Institutions

The EU AI Act will have significant implications for financial institutions, which increasingly rely on AI for various functions such as credit scoring, fraud detection, and automated trading. The regulation aims to ensure that these AI systems are used responsibly and do not pose risks to consumers or the financial system.

Financial institutions must invest in compliance measures, including conducting conformity assessments and ensuring transparency in their AI systems. In some instances, they may need to adjust their AI strategies and operations to align with the new requirements. But it’s not all compliance and red tape; the Act also presents opportunities for innovation by promoting the development of trustworthy AI systems, and transparent, responsible AI can be a driver of business value.

Understanding High-risk Systems

Under the EU AI Act, AI systems are classified based on their risk level. Risk classification includes unacceptable risk, high risk, limited risk, and minimal risk. High-risk systems are subject to the most stringent requirements due to their potential impact on individuals and society.

  • Unacceptable risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people, such as social scoring by governments. Systems of this nature are prohibited outright, so organisations using them must rethink their approach to AI

  • High risk: AI systems used in critical areas such as employment, education, and finance, including credit scoring systems and automated insurance claims

  • Limited risk: AI systems subject to specific transparency obligations, such as chatbots

  • Minimal risk: AI systems that pose minimal or no risk, such as AI-enabled video games

In the finance industry, high-risk systems include those used to assess creditworthiness and to perform biometric identification, among others. These systems must comply with strict requirements for proportionality, fairness, explainability, and accountability, and must also undergo a conformity assessment.
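As a rough illustration only (not legal guidance), the four-tier triage described above can be thought of as a lookup from use case to risk tier. The use cases and tier assignments below are simplified assumptions drawn from the examples in this article, not an authoritative mapping:

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# The use-case-to-tier mapping is a simplified assumption for
# demonstration only; real classification requires legal analysis.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical examples taken from this article's descriptions.
USE_CASE_TIER = {
    "government_social_scoring": "unacceptable",  # prohibited outright
    "credit_scoring": "high",                     # strict obligations
    "automated_insurance_claims": "high",
    "customer_chatbot": "limited",                # transparency obligations
    "ai_video_game": "minimal",
}

def requires_conformity_assessment(use_case: str) -> bool:
    """High-risk systems must undergo a conformity assessment."""
    return USE_CASE_TIER.get(use_case) == "high"

print(requires_conformity_assessment("credit_scoring"))    # True
print(requires_conformity_assessment("customer_chatbot"))  # False
```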

Conformity Assessments

A conformity assessment is a process used to demonstrate that an AI system complies with the requirements set out in the EU AI Act. This process is a crucial requirement for high-risk AI systems.

To perform a conformity assessment, entities must:

  • Identify and mitigate risks associated with the AI system

  • Ensure the quality and integrity of the data used by the AI system

  • Maintain detailed documentation of the AI system's design and functionality

  • Implement measures to ensure human oversight of the AI system

  • Continuously monitor the AI system after it has been deployed

Entities responsible for high-risk AI systems must conduct these assessments to ensure compliance and document their outcomes to meet technical documentation requirements.
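The five obligations above lend themselves to being tracked as a simple compliance record. The following is a hypothetical sketch with illustrative field names, not a format prescribed by the Act:

```python
from dataclasses import dataclass

# Hypothetical record for tracking the conformity-assessment
# obligations listed above; field names are illustrative.

@dataclass
class ConformityAssessment:
    system_name: str
    risks_identified_and_mitigated: bool = False
    data_quality_assured: bool = False
    technical_documentation_maintained: bool = False
    human_oversight_implemented: bool = False
    post_deployment_monitoring: bool = False

    def outstanding_obligations(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [name for name, done in vars(self).items()
                if name != "system_name" and not done]

    def is_complete(self) -> bool:
        return not self.outstanding_obligations()

assessment = ConformityAssessment("credit-scoring-model",
                                  risks_identified_and_mitigated=True)
print(assessment.is_complete())             # False
print(assessment.outstanding_obligations())
```

Documenting which obligations remain outstanding also supports the Act's technical documentation requirements mentioned above.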

Other Requirements of the EU AI Act

Building AI systems that comply with the EU AI Act involves several key considerations, especially for financial institutions. To build compliant systems, institutions must understand the specific requirements of the EU AI Act, identify and assess the risks associated with their AI systems, and ensure the quality, integrity, and security of the data used by their AI systems.

Furthermore, they must provide clear information to users about an AI system's functionality and decision-making processes, implement mechanisms for human oversight so that people can intervene when necessary, and continuously monitor and update their AI systems to address emerging risks or compliance issues.

Consequences for non-compliance

The EU AI Act outlines significant monetary penalties for non-compliance with the Act’s requirements. Penalties are handed down in three tiers based on the nature of the violation:

  • Non-compliance with the prohibited AI practices under Article 5 of the Act commands fines of up to €35,000,000 or up to 7% of annual turnover for the preceding financial year, whichever is higher

  • Non-compliance with provisions related to operators or notified bodies is subject to fines of up to €15,000,000 or up to 3% of annual turnover for the preceding financial year, whichever is higher

  • The supply of incorrect, incomplete, or misleading information to regulators in reply to a request commands fines of up to €7,500,000 or up to 1% of annual turnover for the preceding financial year, whichever is higher

Small and medium enterprises (SMEs) found to be in violation of the Act are subject to the same tiers. However, for SMEs each fine is capped at whichever of the two amounts is lower: the fixed sum or the percentage of turnover.
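The tiered "whichever is higher" rule (and the SME cap at the lower amount) amounts to a simple max/min calculation. A sketch with illustrative turnover figures:

```python
# Sketch of the penalty calculation described above.
# Tier caps per the Act: (fixed cap in euros, share of annual turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover: float, sme: bool = False) -> float:
    """Whichever amount is higher, or for SMEs whichever is lower."""
    cap, pct = TIERS[tier]
    pick = min if sme else max
    return float(pick(cap, pct * annual_turnover))

# A firm with EUR 1bn annual turnover breaching a prohibited practice:
print(max_fine("prohibited_practices", 1_000_000_000))            # 70000000.0
print(max_fine("prohibited_practices", 1_000_000_000, sme=True))  # 35000000.0
```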

Compliance with the Act will have a major impact on financial services, not least in avoiding monetary penalties attached to non-compliance but also in increasing operational burdens to meet the Act’s requirements. However, the Act also presents the potential to unlock opportunities for building responsible AI systems and delivering untapped business outcomes.

 

Speak to an expert at Caspian One to learn more about our AI Practice and how we can help you to unlock potential in your AI transformation projects.

Disclaimer: This article is based on publicly available, AI-assisted research and Caspian One’s market expertise as of the time of writing; written by humans. It is intended for informational purposes only and should not be considered formal advice or specific recommendations. Readers should independently verify information and seek appropriate professional guidance before making strategic hiring decisions. Caspian One accepts no liability for actions taken based on this content. © Caspian One, March 2025. All rights reserved.

 


