What are the risk categories?
The AI Act divides AI systems into three risk classes: “unacceptable,” “high,” and “low/minimal.”
AI systems that pose an unacceptable risk may not be placed on the market, put into service, or used. This category includes AI systems that use subliminal techniques to influence human behavior in harmful ways, as well as systems that exploit the vulnerabilities of particularly vulnerable individuals.
Additionally, the use of AI systems by public authorities to assess or classify the trustworthiness of natural persons (“social scoring”) is prohibited. Nor may AI systems, in principle, be used for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes.
AI systems that pose a high risk to the health and safety or fundamental rights of natural persons are referred to as “high-risk AI systems.” These fundamental rights include human dignity, respect for private and family life, protection of personal data, freedom of expression and information, and freedom of assembly and association.
AI systems that are neither prohibited as posing an unacceptable risk nor classified as high-risk fall into the low/minimal risk category. These systems are subject to less stringent requirements. Providers of such systems are nevertheless encouraged to establish codes of conduct and to apply the requirements for high-risk AI systems voluntarily. Additionally, the EU AI Act requires that even low-risk AI systems must be safe when they are placed on the market or put into service.
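The ordering of these three categories can be thought of as a decision cascade: the prohibition test comes first, then the high-risk test, and everything else falls through to low/minimal. The sketch below is purely illustrative; the names (`RiskCategory`, `AISystem`, `classify`, and the boolean flags) are invented for this example and stand in, very loosely, for the legal tests in the Act. It is not official tooling and does not capture the Act's actual criteria.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskCategory(Enum):
    """The three risk classes described above (illustrative names)."""
    UNACCEPTABLE = auto()  # prohibited outright
    HIGH = auto()          # permitted, subject to strict requirements
    LOW_MINIMAL = auto()   # permitted, lighter obligations apply


@dataclass
class AISystem:
    """Hypothetical flags standing in for the Act's legal tests."""
    uses_subliminal_techniques: bool = False
    exploits_vulnerable_groups: bool = False
    used_for_social_scoring: bool = False
    risks_health_safety_or_fundamental_rights: bool = False


def classify(system: AISystem) -> RiskCategory:
    # Prohibited practices are checked first: if any applies, the
    # system may not be placed on the market, put into service, or used.
    if (system.uses_subliminal_techniques
            or system.exploits_vulnerable_groups
            or system.used_for_social_scoring):
        return RiskCategory.UNACCEPTABLE
    # Next, the high-risk test: a high risk to health and safety or
    # to fundamental rights triggers the high-risk regime.
    if system.risks_health_safety_or_fundamental_rights:
        return RiskCategory.HIGH
    # Everything else falls through to the low/minimal category.
    return RiskCategory.LOW_MINIMAL
```

The point of the cascade is that the categories are mutually exclusive and checked in order of severity: a system that would satisfy both the prohibition and the high-risk test is simply prohibited.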
Given the AI Act's broad definition of AI, most AI systems are expected to fall within its scope and will need to comply once the law comes into force.