Brussels – The European Union must ban the sale of so-called “nudifier apps”, artificial intelligence systems capable of altering images of real people to create, without their consent, nude or sexually explicit versions of them. That is the substance of a joint amendment to the Artificial Intelligence Act (AI Act), which the European Parliament’s committees on the Internal Market and Consumer Protection and on Civil Liberties, Justice and Home Affairs adopted today (18 March) with 101 votes in favour, 9 against, and 8 abstentions. The AI Act, which entered into force in 2024, is the EU regulation governing the use of artificial intelligence; the European Commission launched the amendment process in November 2025.
The European Parliament had announced its intention to push for a ban on “nudifier” apps as early as mid-January, when Grok, the AI assistant of the social network X, found itself at the centre of a controversy over the mass generation of content that virtually “undressed” real people without their knowledge. According to data released by the NGO Center for Countering Digital Hate, Grok is reported to have produced 3 million sexually explicit images and 20,000 artificial depictions of child sexual abuse over an eleven-day period between late 2025 and early 2026, before the platform, controlled by Elon Musk, announced measures to combat their spread. Nor is the issue limited to Grok, as Michael McNamara, co-rapporteur for the Committee on Civil Liberties, Justice and Home Affairs, noted: “The proposal was eagerly awaited by our citizens.”
In addition to the ban on “nudifying” AI systems, the two committees approved further amendments to the AI Act. In particular, MEPs decided to postpone the entry into force of certain rules on so-called “high-risk” AI systems, such as those that use biometrics or are deployed in critical sectors like infrastructure and healthcare. Current legislation requires companies to comply with the new rules by 2 August this year, but according to EU lawmakers, “the definition of key standards is unlikely to be ready by this date.” Two new deadlines are therefore proposed: 2 December 2027 for high-risk AI systems listed directly in the law, and 2 August 2028 for those already covered by other EU rules on safety and market surveillance. The committees also backed extending the timeframe for companies to comply with the rules on so-called watermarking, i.e. indicators showing when content has been generated using artificial intelligence. Here, however, MEPs propose a shorter extension, until 20 November 2026, rather than the European Commission’s proposed deadline of 2 February 2027.
The final set of amendments approved today proposes a series of measures to make the rules on AI “simpler and more flexible” for businesses, ensuring they do not become an obstacle to innovation. These include extending the support measures already provided for small and medium-sized enterprises (SMEs) to so-called small mid-cap enterprises, companies intermediate in size between SMEs and large groups, and relaxing the AI Act’s obligations for products already regulated by sector-specific European laws (such as medical devices, radio equipment, and toys). Another point concerns potential biases in artificial intelligence systems: companies will be able, in limited cases and with appropriate safeguards, to use personal data to identify and correct such biases. The aim is to make systems fairer, for example by preventing an algorithm used to screen job candidates from favouring men over women simply because it was trained on unbalanced data.
The plenary session in Strasbourg will vote on the amendments approved today on 26 March, the date from which negotiations with Member States within the Council of the EU may begin.
English version by the Translation Service of Withub