AI in Business: Totally Prohibited Practices & High-Risk Systems (AI Act Article 5)
Since 2 February 2025, eight AI practices have been completely banned in Europe, with fines of up to €35M or 7% of global annual turnover, whichever is higher. And from 2 August 2026, high-risk systems become subject to strict compliance obligations. Here is what your company absolutely must know and avoid.
- 8 practices banned since 2 February 2025 (Article 5): social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time remote biometric identification in public spaces (narrow law-enforcement exceptions aside), individual crime prediction by profiling, untargeted facial scraping, emotion recognition at work and in education, and sensitive biometric categorisation. Companies were required to stop immediately.
- High-risk systems (Annex III, 8 domains) are permitted but heavily regulated: technical documentation (an estimated 200–300 hours per system), mandatory human oversight, and logs retained for at least 6 months
- Maximum fine for prohibited practices: €35M or 7% of global annual turnover, whichever is higher; for SMEs, the fine is capped at whichever of the two amounts is lower
- For credit and insurance decisions using automated AI: companies must be able to explain the decision in plain language to the individual concerned (right to explanation under GDPR Article 22 on automated decision-making, reinforced by AI Act Article 86)
Part 1 — The 8 Completely Prohibited Practices (Article 5 AI Act)
These practices violate fundamental rights and the values of the European Union. They have been banned since 2 February 2025; only real-time biometric identification carries narrow, strictly controlled exceptions (see practice 8). Companies using them were required to stop immediately.
1. Social Scoring
Definition: Evaluating or ranking individuals over time based on their social behaviour or personal characteristics, leading to unfavourable treatment in contexts unrelated to the original data.
Prohibited concrete examples:
- Using an employee's absenteeism score to automatically refuse their bank loan
- A government system that reduces social benefits based on beneficiaries' "online behaviour"
- Access to services conditional on a "citizen score" — modelled on the Chinese system
2. Subliminal Behavioural Manipulation
Definition: Using subliminal techniques beyond the individual's awareness to distort their behaviour in a way that causes them harm.
Prohibited examples:
- Social media algorithms designed to create addiction by exploiting cognitive biases
- Targeted advertising using subliminal visual patterns to induce compulsive purchases
- AI-driven "dark pattern" interfaces to trap users
3. Exploitation of Vulnerabilities
Definition: Specifically targeting vulnerable groups (age, disability, socio-economic situation) to distort their behaviour and cause them harm.
Prohibited examples:
- AI-connected toys inciting children to make dangerous choices
- AI-driven scams targeting elderly people to extract personal data or money
- Applications manipulating people in precarious situations into unfavourable contracts
4. Individual Crime Prediction by Profiling
Definition: Assessing the risk that an individual will commit a criminal offence based solely on their profiling or personality traits.
Prohibited example: Assigning a "criminal dangerousness score" to an individual based on race, postal code or demographic data, without any actual offence having been committed.
5. Untargeted Facial Scraping
Definition: Building or enriching facial recognition databases by indiscriminate scraping of images from the internet or CCTV cameras.
Prohibited examples:
- Scraping LinkedIn, Facebook or Instagram photos without user consent to train a facial recognition model
- Using surveillance cameras to automatically build a biometric database
6. Emotion Recognition in the Workplace and Education
Definition: Analysing and inferring the emotions of individuals in professional or educational contexts, for example to assess their performance; only medical or safety uses are exempt.
Prohibited examples:
- A recruitment interview tool that analyses a candidate's micro-expressions and automatically rejects those who "seem nervous"
- Student monitoring software that detects their "attention level" via webcam and penalises the "disengaged"
- A system evaluating remote workers' "motivation" via their camera
7. Sensitive Biometric Categorisation
Definition: Classifying individuals on the basis of their biometric data to infer protected characteristics: racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
Prohibited example: A system claiming to infer a person's sexual orientation or political opinions from facial images.
8. Real-Time Biometric Identification in Public Spaces
Definition: Deploying "live" facial recognition in public spaces by law enforcement.
Limited exceptions: targeted searches for victims of abduction or missing persons, prevention of a specific and imminent terrorist threat, and location of suspects of certain serious crimes; each requires prior authorisation from a judicial or independent administrative authority.
Part 2 — High-Risk AI Systems (Annex III)
These systems are permitted but subject to strict obligations: technical documentation (an estimated 200–300 hours per system), a quality management system, mandatory human oversight, and logs retained for at least 6 months. A minimal logging sketch follows below.
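To make the logging obligation concrete, here is a minimal sketch in Python of structured decision logging with a retention horizon. The field names, file format and `RETENTION` period are illustrative assumptions, not requirements prescribed by the AI Act; check the exact retention duration with your legal team.

```python
# Minimal sketch of structured event logging for a high-risk AI system.
# Field names are illustrative: the AI Act requires that high-risk systems
# log events automatically and that logs be kept for at least six months.
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # >= 6 months; adjust to your legal advice

def log_decision(path: str, system_id: str, input_ref: str,
                 output: str, human_reviewer: str | None) -> None:
    """Append one decision event as a JSON line, with a delete-after date."""
    now = datetime.now(timezone.utc)
    event = {
        "system_id": system_id,            # which AI system produced the output
        "timestamp": now.isoformat(),
        "input_ref": input_ref,            # reference to the input, not the data itself
        "output": output,                  # the system's decision or score
        "human_reviewer": human_reviewer,  # None means no human reviewed it
        "retain_until": (now + RETENTION).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_decision("ai_audit.log", "cv-screener-v2", "application-2025-1042",
             "shortlisted", human_reviewer="hr.manager@example.com")
```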
The 8 High-Risk Domains and Their Use Cases
- Biometrics: Retrospective ("post") remote biometric identification, emotion recognition systems (outside the banned workplace and education contexts), biometric categorisation of sensitive attributes where not already prohibited under Article 5
- Critical infrastructure: Road traffic management, water, gas, electricity and heating networks
- Education and training: Student admissions and selection, learning assessment, exam monitoring, career guidance
- Employment and HR: CV screening, promotion or dismissal decisions, performance assessment, employee behavioural monitoring
- Essential services: Credit scoring, eligibility for social benefits, health/life insurance pricing
- Law enforcement: Assessment of evidence reliability, recidivism risk prediction, AI polygraphs, behavioural profiling
- Migration and asylum: Border risk assessment, asylum and visa application processing, detection and identification of persons
- Justice and democracy: Assisting a judge in interpreting facts or law, systems influencing elections or referendums
Practical Example: Is Your Recruitment Tool High-Risk?
Documented real case: A major technology company used an AI recruitment tool trained on 10 years of historical hiring decisions. Result: the system systematically penalised female candidates for technical positions — because the historical data reflected human bias. The company had to abandon the tool.
Under the AI Act: any AI tool for CV screening or hiring decision support is classified as high-risk. This means mandatory technical documentation, an algorithmic fairness audit (a minimal sketch follows below) and human oversight of every final decision.
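As an illustration of what a basic fairness check can look like, here is a minimal sketch of the "four-fifths rule" on selection rates, a common disparate-impact heuristic. The group labels, sample data and 0.8 threshold are assumptions: the AI Act does not prescribe a single fairness metric, and a real audit goes much further.

```python
# Minimal sketch of one common fairness check for a CV-screening tool:
# the "four-fifths rule" on selection rates. The 0.8 threshold is a
# heuristic, not a legal standard set by the AI Act.
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_shortlisted) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in decisions:
        totals[group] += 1
        selected[group] += shortlisted
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest selection rate divided by highest; < 0.8 suggests adverse impact."""
    return min(rates.values()) / max(rates.values())

# Invented sample data: 100 applications per group.
decisions = ([("women", True)] * 12 + [("women", False)] * 88
             + [("men", True)] * 25 + [("men", False)] * 75)
rates = selection_rates(decisions)
print(rates, disparate_impact_ratio(rates))  # {'women': 0.12, 'men': 0.25} 0.48
```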
Credit Scoring: Between High Risk and Discrimination
A bank using AI to assess client creditworthiness must:
- Document the variables used and justify their relevance
- Exclude any discriminatory proxy (e.g. a postal code that can act as a proxy for ethnic origin)
- Allow a human to challenge and override the automatic decision
- Explain the decision to the client in plain language (right to explanation, GDPR Article 22), as the sketch after this list illustrates
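As a minimal sketch of what a plain-language explanation can look like, assume a simple additive scorecard model. All feature names, weights and the approval threshold below are invented for illustration; a real explanation must be faithful to the model actually deployed.

```python
# Minimal sketch of a plain-language explanation for an automated credit
# decision, assuming a simple additive scorecard. Weights, features and
# the threshold are invented for illustration only.
WEIGHTS = {"income_stability": 2.0, "debt_ratio": -3.5, "payment_history": 2.5}
THRESHOLD = 1.0

def explain_decision(applicant: dict[str, float]) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Name the single factor that hurt (or helped) the score the most,
    # so a human can state the main reason in plain language.
    main_factor = (max(contributions, key=contributions.get) if approved
                   else min(contributions, key=contributions.get))
    verdict = "approved" if approved else "refused"
    return (f"Credit {verdict} (score {score:.2f}). "
            f"Main factor: {main_factor.replace('_', ' ')} "
            f"(contribution {contributions[main_factor]:+.2f}).")

print(explain_decision(
    {"income_stability": 0.6, "debt_ratio": 0.8, "payment_history": 0.5}))
# -> Credit refused (score -0.35). Main factor: debt ratio (contribution -2.80).
```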
What Your Company Must Do Now
- Immediately audit your AI systems: are you using any of the 8 prohibited practices?
- Classify each system according to the 4 AI Act risk levels: unacceptable (prohibited), high, limited, minimal (see the triage sketch after this list)
- For high-risk systems: start technical documentation (200–300h), implement human oversight, activate logs
- Train your staff in AI literacy: a legal obligation since 2 February 2025 (Article 4)
- Consult the CNIL (French data protection authority) if unsure about classifying a biometric or HR system
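To support the classification step above, here is a minimal triage sketch over the four AI Act risk levels. The keyword-to-level mapping is purely illustrative: real classification against Article 5 and Annex III is a legal judgment, made case by case.

```python
# Minimal triage sketch over the AI Act's four risk levels. The example
# mappings are illustrative only; real classification requires legal
# review of Article 5 (prohibited) and Annex III (high-risk).
from enum import Enum

class Risk(Enum):
    PROHIBITED = "prohibited (Article 5): stop immediately"
    HIGH = "high-risk (Annex III): full compliance obligations"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"

PROHIBITED_USES = {"social scoring", "emotion recognition at work"}
HIGH_RISK_USES = {"cv screening", "credit scoring", "exam monitoring"}
LIMITED_USES = {"chatbot", "deepfake generation"}

def triage(use_case: str) -> Risk:
    use_case = use_case.lower()
    if use_case in PROHIBITED_USES:
        return Risk.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return Risk.HIGH
    if use_case in LIMITED_USES:
        return Risk.LIMITED
    return Risk.MINIMAL  # a default that still deserves documented human review

for uc in ["CV screening", "chatbot", "social scoring"]:
    print(f"{uc}: {triage(uc).value}")
```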
AI Compliance Training: Master Article 5 and Annex III
Our "Regulatory Framework Fundamentals" module covers in detail the prohibited practices and obligations for each risk level — in 3 to 7-minute microlearning format, certified, CPF-eligible (Personal Training Account — French government funding).
Check in 10 minutes whether your AI systems fall under the AI Act prohibitions. Free PDF for legal and tech teams.