European Commission Opens Consultation on EU AI Act Serious Incident Guidance

On September 26, 2025, the European Commission initiated a public consultation on draft guidance and a draft reporting template for serious artificial intelligence (“AI”) incidents under Regulation (EU) 2024/1689 (the “AI Act”).

The guidance and template are designed to help providers of high-risk AI systems prepare for new mandatory reporting requirements detailed in Article 73 of the AI Act. These obligations, which take effect from August 2026, require providers to notify national authorities of serious incidents involving their AI systems. According to the European Commission, the aim of Article 73 of the AI Act is to facilitate early detection of risks, enhance accountability, enable swift intervention, and foster public confidence in AI technologies.

Key aspects of the draft guidance include:

  • Clarification of terms: The guidance defines key concepts related to serious AI incidents and reporting obligations.
  • Practical examples: Scenarios are provided to illustrate when and how incidents should be reported. For example, an incident or malfunction may include misclassifications, significant drops in accuracy, AI system downtime, or unexpected behaviors.
  • Reporting obligations and timelines: The guidance sets out the different obligations that apply to different actors, including providers and deployers of high-risk AI systems, providers of GPAI models with systemic risk, market surveillance and national competent authorities, the European Commission and the AI Board.
  • Relationship to other laws: The guidance explains how these AI-specific rules interact with broader legal frameworks and reporting obligations, such as the Critical Entities Resilience Directive, the NIS2 Directive and the Digital Operational Resilience Act.
  • International alignment of reporting regimes: The guidance seeks consistency with global efforts such as the AI Incidents Monitor and Common Reporting Framework of the Organisation for Economic Co-operation and Development (“OECD”).

Stakeholders are encouraged to review the draft guidance and reporting template and to submit feedback through the consultation by November 7, 2025.
