Introduction to the topic

Under Article 6 of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), an AI system is classified as high-risk if it either (1) is a safety component of, or is itself, a product covered by EU harmonised legislation, such as medical devices, vehicles, or industrial machinery, or (2) is intended for a use case listed in Annex III, including education, employment, law enforcement, and biometric identification. The Act further provides that an Annex III system performing profiling of natural persons, that is, the automated evaluation or prediction of individuals' traits, preferences, or behaviour, is always classified as high-risk: even where the provider believes the system does not pose a significant risk, the profiling function alone triggers the classification. For other Annex III systems, a provider that considers its system not to pose a significant risk must document that assessment and still register the system in the EU database; otherwise, it must comply with the full framework of regulatory obligations, including documentation, transparency, testing, and human oversight. This high-risk classification ensures that AI systems which can significantly affect people's lives, safety, or rights are subject to stricter scrutiny and accountability, ultimately promoting trust and lawful innovation across the EU.
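The classification rule described above can be read as a simple decision procedure. The sketch below is only an illustrative simplification of that logic, not a legal test: the class name, attribute names, and boolean model are hypothetical, and the real assessment under Article 6 involves far more nuance than these flags capture.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Hypothetical, simplified attributes for illustration only.
    safety_component_of_harmonised_product: bool  # Article 6(1) path
    annex_iii_use_case: bool                      # Article 6(2) path
    performs_profiling: bool                      # profiling is always high-risk
    documented_no_significant_risk: bool          # provider's Article 6(3) assessment


def is_high_risk(system: AISystem) -> bool:
    """Simplified sketch of the high-risk classification logic under Article 6."""
    if system.safety_component_of_harmonised_product:
        return True
    if system.annex_iii_use_case:
        # Profiling of natural persons cannot be exempted.
        if system.performs_profiling:
            return True
        # Otherwise the provider may document that no significant risk exists;
        # the system must still be registered in the EU database.
        return not system.documented_no_significant_risk
    return False
```

For example, an Annex III recruitment-screening tool that profiles candidates would return True regardless of the provider's own risk assessment, whereas a non-profiling Annex III system with a documented no-significant-risk assessment would return False.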
One of the most significant innovations introduced by Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024—commonly referred to as the EU Artificial Intelligence Act (AI Act)—is its risk-based regulatory approach.