EU and Luxembourg Update on the European Harmonised Rules on Artificial Intelligence—Recent Developments
Overview of the EU AI Act
The European Union (EU) is ushering in a new era of artificial intelligence (AI) regulation with the AI Act of 13 June 2024 (AI Act),1 a comprehensive framework designed to govern the development, deployment, and use of AI systems2 across the EU. This regulation, which has been phased in since February 2025, aims to address risks to safety and fundamental rights, enhance market surveillance, and foster a unified market for trustworthy AI. Importantly, the AI Act seeks to avoid overregulation while ensuring a level playing field for both EU and non-EU providers, thereby preventing market fragmentation.
Interplay with Financial Sector Regulation
The AI Act is crafted to complement existing EU financial regulations, such as the Digital Operational Resilience Act (DORA),3 the Capital Requirements Regulation,4 and the Payment Services Directive 2,5 thereby avoiding contradictions between the AI Act and current banking or payments rules. Instead, the requirements are largely complementary: for example, risk-management concepts and governance controls under the AI Act align closely with DORA’s IT-risk frameworks and internal governance mandates. Forthcoming guidance from the European Commission is expected to help avoid duplication in reporting.
In an effort to streamline and harmonize the EU’s digital regulatory landscape, the European Commission introduced the so-called Digital Omnibus proposal (Digital Omnibus) in November 2025.6 This package simplifies rules on AI, data access, privacy, and cybersecurity, and is particularly relevant for the financial sector, as it amends and consolidates frameworks such as the AI Act, the General Data Protection Regulation,7 the Network and Information Security Directive 2,8 DORA, and the Data Act.9 Notably, it introduces a single incident reporting point and aligns breach notification thresholds and timelines, reducing compliance burdens and clarifying the use of personal data in AI, including for creditworthiness assessments. The reforms are designed to foster innovation and competitiveness while ensuring robust oversight and security.
The Digital Omnibus is currently proceeding through the EU’s ordinary legislative procedure and is being examined by the European Parliament and the Council of the EU. Formal adoption is expected later in 2026, although the timing will depend on negotiations between stakeholders.
Luxembourg Specific Regulation
As an EU regulation, the AI Act is directly applicable across all EU member states. Luxembourg is updating its national legislation through draft bill no. 8476,10 (the “Bill”), which designates competent authorities and sets out enforcement and procedural rules. In the financial sector, the Luxembourg financial sector supervisory authority (Commission de surveillance du secteur financier (CSSF)) will act as the market surveillance authority for AI systems directly connected to financial services, while the Luxembourg insurance sector supervisory authority (Commissariat aux Assurances) will oversee insurance-related AI systems. Certain transparency and media aspects will be managed by an independent audiovisual authority (Autorité luxembourgeoise indépendante de l’audiovisuel), and the national commission for data protection (Commission nationale pour la protection des données) will remain responsible for data protection interfaces.
The Bill is currently under discussion in the Luxembourg Parliament.
AI Opportunities in Finance
A survey conducted by the CSSF and the Luxembourg central bank in 2024 highlights the rapidly growing adoption of AI in Luxembourg’s financial sector, particularly for internal use cases. Already at that time, approximately 28% of institutions reported having AI use cases in production or development, while a further 22% were experimenting with AI. Adoption rates were higher among payment and e-money institutions (63%) and banks (38%), and are expected to have increased considerably since then.
Prominent use cases include anti-money laundering and fraud monitoring, client onboarding and KYC, process automation, search and summarization, and customer support. There are also pilot projects in credit scoring and analytics. The benefits of AI in finance are clear: improved efficiency, enhanced analytics and personalization, greater accuracy, and 24/7 availability.
Risks and Controls
The implementation of AI systems introduces a range of operational and cyber risks, including data leakage, system failures, malware, and unauthorized access. Governance remains a critical concern, with accountability resting with senior management and requirements for explainability and transparency. Data risks such as quality, privacy, bias, and discrimination must be managed, while model risks include monitoring for accuracy, model drift, and security vulnerabilities. Oversight of third-party providers and upskilling staff are also essential.
Risk-Based Approach
The AI Act classifies AI systems according to their level of risk:
- Unacceptable risk: prohibited practices, such as social scoring and emotion recognition in workplaces. Providers and deployers must ensure that such systems are not placed on the market, put into service, or used within the EU;
- High risk: subject to strict requirements, including credit scoring and biometric identification. Providers of high-risk systems must implement a quality-management system, maintain technical documentation, enable automatic logging, undergo conformity assessment, register in the EU database, and fulfil corrective actions and transparency duties;
- Transparency risk: must meet information obligations, such as those applicable to chatbots. Providers must ensure that natural persons are informed when they are interacting with an AI system, and that outputs generated or manipulated by AI are clearly marked and detectable as artificial; and
- Minimal risk: permitted without restrictions. Providers and deployers of such systems are not subject to the detailed compliance obligations applicable to high-risk or unacceptable-risk systems.
Penalties for non-compliance are significant: up to €35 million or 7% of worldwide turnover for prohibited practices, up to €15 million or 3% for other infringements, and up to €7.5 million or 1% for supplying incorrect or misleading information. The penalties apply to both EU and non-EU based companies offering AI systems in the EU.
Timeline and Implementation
The obligations under the AI Act will be phased in over several years, with key milestones as follows:
- By 2 February 2025, prohibited AI practices must cease, and AI literacy obligations will begin for all providers and deployers;
- By 2 August 2025, governance provisions and obligations for general-purpose AI models will come into effect;
- By 2 August 2026, high-risk AI systems in the financial sector must comply with specific requirements; and
- By 2 August 2027, the remaining provisions will become fully applicable.
This timeline may be affected by the Digital Omnibus: the current draft links the date on which compliance with high-risk obligations becomes mandatory to the availability of standards and support tools, with long-stop dates of 2 December 2027 (high-risk systems) and 2 August 2028 (product-embedded systems), respectively.
Why This Matters for You
The global reach of the AI Act means that AI providers and financial institutions operating in the EU, or interacting with users in the EU, must comply with its requirements, regardless of where they are incorporated or established. The EU’s risk-based approach, with its emphasis on traceability, explainability, transparency, and human oversight, may influence global best practices and regulatory trends.
The alignment of the AI Act with financial sector regulations underscores the need for integrated compliance strategies across jurisdictions. Furthermore, the EU’s support tools and regulatory sandboxes may serve as models for regulators and industry bodies in other regions, fostering innovation and robust oversight.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.