AI Act (Regulation 2024/1689): The Complete Guide for Businesses in 2026
April 13, 2026 · ADC Team · 10 min read · AI Compliance


Entered into force on August 1, 2024, the European regulation on artificial intelligence — the AI Act (Regulation 2024/1689) — is the world's first comprehensive legal framework regulating AI. For European businesses, the deadlines are already here. Here is what you need to know and do before August 2026.

Key takeaways
  • Applies to every company using AI on the European market, regardless of where it is based (Amazon, Microsoft, French start-ups…)
  • Critical deadline: 2 August 2026 — full application for high-risk systems (Annex III)
  • Prohibited practices since 2 February 2025: social scoring, subliminal manipulation, real-time biometric identification in public spaces
  • AI literacy mandatory (Article 4) for all staff deploying or supervising AI systems
  • €35M or 7% of global turnover: maximum fine for prohibited practices
  • €15M or 3% of global turnover: other infringements
  • August 2026: full application for high-risk systems
  • 200–300 hours: documentation required per high-risk system

What is the AI Act?

The AI Act is a European regulation based on a risk-based approach: the more an AI system poses risks to fundamental rights and people's safety, the stricter the obligations. It applies to any company that develops, deploys or uses AI systems on the European market — including American or Chinese companies whose customers are in Europe.

The 4 Risk Levels

1. Unacceptable Risk — Prohibited Practices (Article 5)

These practices have been completely banned since February 2, 2025. They violate the fundamental values of the European Union:

  • Social scoring by public authorities
  • Subliminal behavioural manipulation beyond the individual's awareness
  • Exploitation of vulnerabilities (age, disability, socio-economic situation)
  • Individual crime prediction based solely on profiling
  • Untargeted scraping of faces from the internet or CCTV cameras to create facial recognition databases
  • Emotion recognition in the workplace or in education
  • Biometric categorisation to infer racial origin, political opinions or sexual orientation
  • Real-time biometric identification in public spaces (except for missing persons or imminent terrorist threats)

2. High Risk — Enhanced Obligations (Annex III)

These systems are permitted but subject to strict requirements before market placement:

  • Biometrics: Remote biometric identification (retrospective), emotion recognition
  • Critical infrastructure: Road traffic management, water/gas/electricity networks
  • Education: School admissions, assessment of learning, exam monitoring
  • Employment: CV screening, promotion or dismissal decisions, performance evaluation
  • Essential services: Credit scoring, eligibility for social benefits, health/life insurance pricing
  • Justice: Recidivism prediction, AI interpretation of facts for a judge
  • Migration: Border risk assessment, asylum application processing
  • Democracy: Systems influencing elections or referendums

3. Limited Risk — Transparency Obligations

These systems must inform the user that they are interacting with an AI:

  • Chatbots and virtual assistants
  • Deepfakes (AI-generated text, images, video, audio)
  • General Purpose AI (GPAI) models such as ChatGPT, Gemini, Claude

Practical example: An e-commerce site using a chatbot without clearly stating "this conversation is handled by a virtual assistant" breaches the transparency obligations and faces fines of up to €15M or 3% of global annual turnover.
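In practice, the disclosure duty for chatbots is simple to implement. Here is a minimal sketch, assuming a hypothetical `wrap_reply` helper in a company's own chatbot backend (the function name and session handling are illustrative, not part of any library):

```python
AI_DISCLOSURE = "This conversation is handled by a virtual assistant."

def wrap_reply(bot_reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a session.

    The AI Act requires users to be informed that they are
    interacting with an AI; showing the notice once per session,
    before any substantive reply, is one common way to do that.
    """
    if first_turn:
        return f"{AI_DISCLOSURE}\n{bot_reply}"
    return bot_reply
```

A banner displayed persistently in the chat widget achieves the same goal; what matters is that the user is clearly informed before relying on the conversation.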

4. Minimal Risk — No Specific Obligations

Spam filters, recommendation systems, video games. No regulatory constraints imposed by the AI Act.
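The first practical step for most companies is an inventory of their AI systems, triaged against these four tiers. The sketch below shows one hypothetical way to record such a triage; the use-case labels and their tier assignments mirror the examples above and are illustrative only — a real classification requires legal analysis of each system:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Annex III)"
    LIMITED = "limited risk (transparency)"
    MINIMAL = "minimal risk"

# Hypothetical inventory mapping, based on the examples in this article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the presumed AI Act tier for a known use case."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}; assess manually")

print(triage("cv_screening").value)  # high risk (Annex III)
```

The point of the explicit `ValueError` is that anything not yet classified should fail loudly and go to a human reviewer, rather than silently defaulting to minimal risk.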

The Full Compliance Timeline

  • 13 June 2024: Adopted by the European Parliament and Council
  • 1 August 2024: Regulation enters into force
  • 2 February 2025: Prohibitions (Article 5) in force + AI literacy obligation (Article 4) — mandatory staff training
  • 2 May 2025: Codes of practice for GPAI models available
  • 2 August 2025: Obligations for GPAI models (ChatGPT, Gemini, Llama…)
  • 2 February 2026: Commission guidelines on high-risk classification
  • 2 August 2026: Full application — high-risk systems (Annex III) + transparency rules
  • 2 August 2027: Medical devices, toys and regulated products incorporating AI (Annex I)
  • 31 December 2030: Large public administration IT systems

What Businesses Must Put in Place

For high-risk systems, the AI Act requires a comprehensive governance framework:

  • Quality Management System (QMS): Documented policies, procedures and instructions for compliance, data management and risk mitigation
  • Technical documentation: Complete description of the system, its algorithms and data sources — expect 200 to 300 hours to document a single system
  • Human oversight: High-risk systems must allow a human to monitor, intervene and override decisions
  • Event logs: Automatic retention for at least 6 months
  • AI literacy (Article 4): Mandatory training for all staff deploying or supervising AI

Realistic duration and cost of a compliance audit: 3 to 6 months of work, representing between 10 and 20% of the company's total AI investment.

Case study: a 200-employee industrial SME

A mechanical engineering company uses three AI tools:

  • A customer service chatbot → Limited risk: display "virtual assistant", comply within 1 week
  • A machine failure prediction system → High risk (critical infrastructure): full documentation, audit, mandatory human oversight
  • A CV screening tool → High risk (employment/recruitment): documentation, transparency, algorithm fairness audit

Conclusion: Compliance as a Competitive Advantage

The AI Act is not just a legal constraint: compliant companies gain a competitive edge in public tenders (where compliance has become a disqualifying criterion), stronger investor confidence and protection against litigation. Non-compliance is an aggravating factor in algorithmic discrimination lawsuits.

Frequently Asked Questions about the AI Act

When does the AI Act come into force?

The AI Act entered into force on 1 August 2024. Prohibited AI practices had to be discontinued by 2 February 2025. Obligations for high-risk systems listed in Annex III apply from 2 August 2026, and AI embedded in regulated products (Annex I) follows on 2 August 2027.

Which companies are affected by the AI Act?

All companies that place AI systems on the European market or put them into service in the EU are affected, regardless of where they are established. This includes SMEs and start-ups, with some exemptions for research and development.

What are the penalties for non-compliance with the AI Act?

Penalties vary: up to €35M or 7% of global turnover for prohibited AI uses; up to €15M or 3% for other violations; up to €7.5M or 1% for incorrect information to authorities.

Train your teams on the AI Act with AutomationDataCamp

Our AI Compliance & AI Act training (45h, certified, CPF-eligible — Personal Training Account, French government funding) covers the entire regulation in microlearning format, with 3-to-7-minute modules designed for a completion rate above 80%.

View the course

Related articles

AI Risks & Sanctions: What Companies Face

€35M or 7% of turnover: a complete breakdown of fines and operational consequences...

Read more

AI Laws: USA, Europe, France — 2026 Comparison

How the AI regulatory framework differs between the United States, Europe and France...

Read more