The full and final text of the European Union's law on artificial intelligence has been published in the Official Journal of the European Union. The new law takes effect on August 1, and its provisions will apply in full to AI developers within 24 months, by mid-2026.
In six months, the EU will begin enforcing bans on specific AI applications, such as social scoring systems, the collection of facial images to build facial recognition databases, and real-time emotion recognition in schools and workplaces.
In nine months, the EU will begin applying codes of practice for AI developers. The EU AI Office, set up by the European Commission, will draft these codes together with consultancies and with companies that provide general-purpose models deemed to pose systemic risks.
After a year, makers of general-purpose AI models, such as those behind ChatGPT, will have to comply with new transparency requirements and demonstrate that their systems are safe for users. The law also requires that deepfakes and other AI-generated images, video and audio be clearly labelled.
Companies that train AI models must respect copyright law, unless a model is built solely for research and development. EU lawmakers reached political agreement on the first comprehensive AI rulebook in December last year.
The framework imposes different obligations on AI developers depending on the use case and the perceived level of risk. Most uses, considered low risk, will not be regulated, but some will be prohibited outright.
High-risk use cases, such as biometric applications or the use of AI in law enforcement, employment, education and critical infrastructure, are permitted under the law, but developers of these applications will face strict requirements, including obligations on data quality.