The EU AI Act Starts Applying (Partially)
Today marks the start of the gradual application of the EU Artificial Intelligence Act (AI Act). It is worth emphasizing this point to avoid common misinterpretations. While the entire regulation has been in force since August 2024, today specifically marks the application of Chapter I, "General Provisions," and Chapter II, "Prohibited AI Practices." Chapter I, though often overlooked, contains four articles, while Chapter II contains a single provision.
For months, attention has been fixated solely on the prohibited AI applications, disregarding the overarching AI literacy obligation imposed on all providers and deployers of AI systems, which applies regardless of any risk classification. This neglect was likely due to the absence of specific enforcement mechanisms attached to the literacy provision. However, now that AI literacy presents a practical sphere of implementation, along with the opportunity to leverage educational initiatives for visibility, it has inevitably entered the conversation. Yet key provisions concerning subject matter, scope, and definitions remain largely ignored. As legal professionals, we must stress that these provisions also apply as of today. Definitions and scope serve as the gateway to any regulation; failing to interpret them properly could render the remaining obligations nothing more than a collection of words. Underestimating these articles, or failing to read them with due diligence, would be a grave mistake.
AI Literacy: A Long-Ignored Obligation
Let us begin with the AI literacy requirement in Article 4, which has been overlooked for quite some time, perhaps on the assumption that all stakeholders inherently comprehend what they read. First and foremost, contrary to some erroneous explanations, this is not a provision aimed at "educating society." Instead, it imposes a literacy obligation on providers and deployers of AI systems.
The AI Act itself does not detail the precise contents of this obligation, nor does it specify any dedicated sanctions for non-compliance. Guidance from the AI Office is expected, but preliminary indications, particularly from discussions within the AI Pact framework, suggest that the requirement will not be interpreted in an excessively rigid manner. At least initially, a general informational and educational effort covering the AI Act's contents is expected to suffice.
The target audience for this literacy obligation is not particularly narrow. It extends not only to employees but also, more broadly, to other persons dealing with the operation and use of AI systems on the provider's or deployer's behalf. However, merely providing a reading list, a policy document, or a set of instructional videos will not be sufficient. Instead, the regulation emphasizes active information dissemination, knowledge transfer, and a contextual understanding of AI systems. In other words, providers and deployers must deliver training that imparts both knowledge and skills, a component that can be generalized to some extent. In addition, they must incorporate system-specific contextual insights that account for the actual user base, which means compliance will inevitably vary from system to system, or at least across different use cases.
Let me reiterate: steer clear of anyone claiming to offer “100% AI Act compliance” solutions at this stage, just as you should avoid those who will, starting tomorrow, claim to provide “AI literacy compliance training.” This obligation cannot be fulfilled without a thorough understanding of your specific AI systems.
On the Prohibitions
As of today, the AI Act's Article 5 prohibitions apply across the EU. These are not officially labelled "systems posing unacceptable risk" but "prohibited AI practices." However, this does not mean that all other AI applications are automatically permitted. Additional prohibitions may arise from other EU regulations: the GDPR, for example, imposes certain restrictions, as do Digital Markets Act (DMA) provisions targeting gatekeepers. It is crucial to abandon the misconception that the AI Act is the sole regulatory framework governing AI in the EU.
The prohibitions under the AI Act primarily target dystopian AI applications, yet their wording remains highly open to interpretation. Until enforcement mechanisms and governance structures become fully operational, we will likely witness excessive speculation and broad claims. However, as of today, these are not just formal provisions in a legal text; they are actively applicable rules. Any attempt to interpret these broad formulations should be undertaken by individuals well-versed in AI definitions, EU law, and the interpretative approaches of the Court of Justice of the EU (CJEU) and the European Commission. Those working in policy should exercise particular caution, as this area is now increasingly precarious.
A persistent misconception, likely stemming from outdated reports, is that the AI Act bans biometric AI applications outright. This is incorrect. The Act does not prohibit biometric AI systems in general; its outright ban on biometric identification concerns the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, and even that is subject to narrowly defined exceptions. Most other biometric applications are categorized as high-risk, and biometric verification, whose sole purpose is to confirm that a person is who they claim to be, is excluded even from that classification. Similarly, manipulative AI systems are not universally prohibited; the ban applies only where the system materially distorts behaviour and causes, or is reasonably likely to cause, significant harm. As I mentioned earlier, these distinctions require legal interpretation, not superficial graphics or flashy infographics.
The Question of Penalties
And, of course, there is the matter of penalties. Violations of the prohibited AI practices provision carry the highest penalties under the AI Act. However, the penalty provisions do not apply until 2 August 2025. In other words, as of today, no administrative fines can be imposed for violations of these prohibitions. The European Commission and the AI Office will not start issuing fines tomorrow, but they will begin monitoring.
Contrary to widespread assumptions, it is entirely possible that regulators will discover that the number of banned applications is far greater than initially assumed — but time will tell.
Conclusion
Thus begins the AI Act implementation marathon. To wrap up, I will leave you with an announcement for the third AI Pact webinar, which is dedicated to AI literacy. Rather than relying on superficial overviews that claim to condense 400 pages into two or three pages or a single flashy "cheat sheet," seek knowledge from those who have actually read, analyzed, and understood the text itself and the primary sources: