Brussels – Tomorrow, 2 August, the obligations for generative AI models set out in the European Artificial Intelligence Act, the world's first law of its kind, which aims to promote the development, market deployment, and responsible use of artificial intelligence in the EU, enter into force. The rules, intended to ensure "greater transparency, security, and accountability," concern providers of general-purpose AI (GPAI) models placed on the market, particularly those that present systemic risks, such as the most advanced models of the GPT-4 class. The obligations fall into two categories according to the level of risk the systems pose.
Before placing their products on the market, all providers of GPAI models will have to draw up technical documentation, implement a copyright policy, and publish a summary of the content used to train the model. Providers of GPAI models with systemic risk (that is, risks to fundamental rights and security, including the potential loss of control over the model) will also have to notify the European Commission of such models, carry out risk assessment and mitigation, report serious incidents, and put cybersecurity measures in place. In essence, the rules demand clearer information on how AI models are trained, better enforcement of copyright protections, and more responsible AI development.
To summarise, from tomorrow, providers must comply with transparency and copyright obligations when placing GPAI models on the EU market. Models already placed on the market before 2 August 2025 must be brought into compliance by 2 August 2027. Providers of the most advanced or highest-impact models posing systemic risks, i.e. those trained with more than 10^25 FLOPs of compute, must fulfil additional obligations, such as notifying the Commission and ensuring model security.
The European AI Act entered into force on 2 August 2024 and is being implemented in phases; most of its rules will apply from 2 August 2026. Since 2 February 2025, however, a ban has applied to AI systems that pose unacceptable risks: cognitive-behavioural manipulation of people or of specific vulnerable groups (e.g. voice-activated toys that encourage dangerous behaviour in children); social scoring (the classification of people according to their behaviour, socio-economic status, or personal characteristics); biometric identification and categorisation of people; and real-time remote biometric identification systems, such as facial recognition in public spaces.
To help providers meet the obligations entering into force tomorrow, the European Commission has published guidelines clarifying who must comply: GPAI models are defined as those trained with more than 10^23 FLOPs and capable of generating language. The Commission has also published a template to help providers summarise the data used to train their models. Finally, the Commission and the Member States confirmed that the GPAI Code of Conduct, drawn up by independent experts, is an appropriate voluntary tool for providers of GPAI models to demonstrate compliance with the AI Act; providers who sign and adhere to the Code will benefit from reduced administrative burdens and greater legal certainty.
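As a rough illustration of the two compute thresholds cited above, the classification can be sketched as follows. This is only an indicative reading of the figures reported here, not the Act's legal test; the function name and comparison logic are ours.

```python
# Illustrative sketch: classify a model by training compute against the
# two FLOP thresholds reported in the article (not the Act's legal test).
GPAI_THRESHOLD_FLOPS = 1e23           # indicative GPAI definition
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # additional obligations above this

def classify(training_flops: float) -> str:
    """Return the article's category for a given amount of training compute."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "GPAI with systemic risk"
    if training_flops > GPAI_THRESHOLD_FLOPS:
        return "GPAI"
    return "below GPAI threshold"

print(classify(3e25))  # GPAI with systemic risk
print(classify(5e23))  # GPAI
```

In practice, whether a model falls under these obligations also depends on its capabilities (e.g. generating language), not on compute alone, as the Commission's guidelines note.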
English version by the Translation Service of Withub