Earlier this summer, rumours circulated of a possible pause to the implementation of the EU AI Act (the AI Act), following calls from industry experts, international organisations and even EU Member States to delay the implementation of the AI Act by years. Criticisms included perceived regulatory overreach, a resulting heavy burden on industry and the uncertain interpretation of provisions in the AI Act due to delays in publication of key resources, such as the Guidelines and Codes of Practice for General-Purpose AI Models.
There was also a change in international perspectives regarding the AI Act, with some expressing concern about the "Brussels effect", referencing the EU’s intention for the AI Act to be regarded as the international standard for AI regulation. At the Paris AI Summit in February, the US Government strongly voiced its disapproval of the EU's approach.
Rumours of a pause in implementation were quashed on 4 July 2025 in a press briefing by Thomas Regnier, Commission spokesman, who stated:
"I've seen indeed a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause. Why? We have legal deadlines established in a legal text. The prohibitions kicked in in February, general purpose AI model obligations will kick in in August."
Subsequent action from the Commission confirmed these comments, with the publication of a package of resources designed to provide clarity and assist providers[1] of general-purpose AI (GPAI) models to comply with their obligations under Articles 53 and 55 of the AI Act.
On 10 July 2025, the Commission received the final version of the Code of Practice for General-Purpose Artificial Intelligence Models (the code), followed swiftly on 18 July by the approval of the Guidelines on the Scope of the Obligations for GPAI Models (the guidelines) and finally, on 24 July, by the publication of the Explanatory Notice and Template for Public Summary of Training Content for GPAI Models (the template).
Although this package of resources was released only a few weeks before the AI Act's new rules on GPAI models took effect, the Commission confirmed the AI Act's implementation timeline remained unchanged. GPAI model obligations have applied from 2 August 2025. No 'grace period' or 'pause' was provided. The obligations will be enforceable as of August 2026 for new AI models and August 2027 for existing AI models, demonstrating that the EU remains keen to emphasise the "Brussels effect" on AI regulation and further its vision of Europe as an AI continent.
In this article, we take a detailed look at the guidelines and consider the next steps that organisations in scope may wish to take in response.
For more information on the key provisions of the other components of the Commission's GPAI models package of resources, the code and the template, please see our PDF document.
Before considering the guidelines, it is worth noting that although the code is voluntary, the Commission is actively encouraging providers of GPAI models to adhere to it, albeit confirming that providers can demonstrate compliance through alternative adequate means. The Commission has stated that adhering to the code will offer a straightforward way to demonstrate compliance (while not providing a presumption of conformity with the AI Act or other relevant laws relating to copyright and data protection) and a streamlined compliance process, as enforcement will be focused on monitoring providers' adherence to the code.
Guidelines on the scope of the obligations for GPAI models
The guidelines complement the code and, while not legally binding, set out the Commission’s interpretation and application of the AI Act, which will guide its enforcement actions. The purpose of these guidelines is to "help actors in the AI ecosystem understand whether the obligations apply to them and what is expected of them so they can innovate with confidence".
The guidelines clarify the scope of the obligations and to whom they apply (including providers, downstream providers and deployers) across four key topics:
- What constitutes a GPAI model
- Providers of general-purpose AI models
- Exemptions from certain obligations
- Enforcement and transition periods
It is worth considering the key provisions of these topics in further detail.
Definition of GPAI model and GPAI models with systemic risk
Article 3(36) AI Act defines a GPAI model as:
"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market".
Readers may recall this definition was criticised when the AI Act was published as being too vague. The guidelines acknowledge that "it is not feasible to provide a precise list of capabilities that a model must display and tasks that it must be able to perform in order to determine whether it is a GPAI Model".
Instead, the Commission's approach is to use the amount of computational resources used to train the model, measured in FLOP (floating-point operations), together with the modalities of the model, to assess whether it is a GPAI model. Therefore, to be a GPAI model, a model must:
- Be trained with at least 10²³ FLOP (training compute)
- Be able to generate language (whether in the form of text or audio), text-to-image or text-to-video, and
- Be capable of competently performing a wide range of distinct tasks
The guidelines acknowledge that training compute is an "imperfect proxy", but it is the most suitable at present. They also confirm that models below the training compute threshold may still qualify as GPAI models if they display "significant generality" and are capable of competently performing a wide range of distinct tasks, and vice versa. The guidelines give some helpful examples of models that are likely to fall out of scope (for example, a model filling in damaged or missing parts of images trained with 10²⁴ FLOP).
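To illustrate how these indicative criteria interact, the sketch below encodes them as a simple first-pass screen. It is purely illustrative: the function and parameter names are hypothetical, the thresholds are taken from the guidelines and, as the guidelines themselves stress, a model below the threshold may still qualify (and one above it may not) based on its actual generality and capabilities.

```python
# Purely illustrative sketch of the guidelines' indicative criteria for
# presuming a model is a GPAI model. Function and parameter names are
# hypothetical; the threshold and modalities come from the guidelines.

GPAI_TRAINING_COMPUTE_THRESHOLD = 1e23  # FLOP (indicative threshold)

def presumed_gpai(training_compute_flop: float,
                  qualifying_modality: bool) -> bool:
    """First-pass screen: presume GPAI status where the model was trained
    with at least 10^23 FLOP and can generate language (text or audio),
    text-to-image or text-to-video. Not determinative: a model below the
    threshold may still qualify if it displays significant generality,
    and one above it may fall out of scope."""
    return (training_compute_flop >= GPAI_TRAINING_COMPUTE_THRESHOLD
            and qualifying_modality)

# A text-generating model trained with 5 x 10^23 FLOP: presumed in scope.
print(presumed_gpai(5e23, qualifying_modality=True))   # True

# The guidelines' image-inpainting example (~10^24 FLOP, no qualifying
# modality): likely out of scope despite exceeding the compute threshold.
print(presumed_gpai(1e24, qualifying_modality=False))  # False
```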
The guidelines also clarify the Commission's criteria for GPAI models with systemic risk.
Article 51(1) of the AI Act states that:
"A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions:
- (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks
- (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII"
A GPAI model is presumed to have high impact capabilities when the cumulative amount of training compute exceeds 10²⁵ FLOP.
The guidelines provide further detail on providers' notification obligations under the AI Act. Providers must notify the Commission without undue delay, and in any event within two weeks, of the systemic risk criterion being met or of it becoming known that it will be met. The guidelines also outline the procedure for contesting classification as a GPAI model with systemic risk: a provider seeking to rebut the presumption is responsible for providing sufficient evidence, and the Commission will have significant discretion in determining whether a model’s capabilities match or exceed those of “the most advanced models”.
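By way of illustration only, the sketch below encodes the presumption threshold and the two-week notification window. The 10²⁵ FLOP figure and the deadline come from the AI Act and the guidelines; the names and structure here are hypothetical.

```python
# Purely illustrative sketch of the systemic risk presumption and the
# notification window. The 10^25 FLOP threshold and the two-week deadline
# come from the AI Act and the guidelines; the names are hypothetical.
from datetime import date, timedelta

HIGH_IMPACT_COMPUTE_THRESHOLD = 1e25  # FLOP, cumulative training compute

def presumed_high_impact(cumulative_training_compute_flop: float) -> bool:
    """A GPAI model is presumed to have high impact capabilities (and so to
    be a GPAI model with systemic risk) above this threshold. The presumption
    is rebuttable, with the evidential burden on the provider."""
    return cumulative_training_compute_flop > HIGH_IMPACT_COMPUTE_THRESHOLD

def notification_deadline(criterion_met_on: date) -> date:
    """Providers must notify the Commission without undue delay and in any
    event within two weeks of the criterion being met (or of it becoming
    known that it will be met)."""
    return criterion_met_on + timedelta(weeks=2)

# A model trained with 3 x 10^25 FLOP meets the presumption; if the
# criterion were met on 2 August 2025, notification would be due by:
assert presumed_high_impact(3e25)
print(notification_deadline(date(2025, 8, 2)))  # 2025-08-16
```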
Who qualifies as a provider?
The guidelines aim to offer clarity on when an actor along the AI value chain must comply with the obligations for providers of GPAI models under the AI Act. They outline the concepts of ‘provider’ and ‘placing on the market’ and provide helpful examples of each. Examples of a provider include:
- If actor A has a general-purpose AI model developed on its behalf by actor B and actor A places that model on the market, then actor A is the provider
- If actor A develops a general-purpose AI model and uploads it to an online repository hosted by actor C, then actor A is the provider
Examples (based on Recital 97 AI Act) of when a GPAI model is considered to be placed on the market include:
- a general-purpose AI model is made available for the first time on the Union market via a software library or package
- a general-purpose AI model is made available for the first time on the Union market via an application programming interface (API)
- a general-purpose AI model is integrated into a chatbot made available for the first time on the Union market via a web interface
The guidelines also attempt to clarify when a downstream actor modifying a GPAI model is considered to become a provider, as the AI Act does not specify the conditions required.
The Commission considers that not every modification should lead to the downstream modifier being treated as a provider; this is only the case where the modification leads to a "significant change in the model's generality, capabilities or systemic risk". The Commission gives an "indicative criterion" for demonstrating a significant change: "the training compute used for the modification is greater than a third of the original model's training compute". The guidelines note that it may be hard to apply this threshold if the original training compute is unknown, but conclude that, unless the original model is a model with high impact capabilities (trained with more than 10²⁵ FLOP), the reference point should be a third of the threshold for a model being presumed to be a GPAI model (10²³ FLOP). The guidelines also note that "while currently few modifications may meet this criterion", this may increase over time as the compute used to modify models increases.
In the guidelines FAQs, the Commission has clearly explained the intention behind these provisions:
"The AI Act balances innovation and regulation by recognising that not every modification or fine-tuning of a general-purpose AI model should be treated as creating a new model. As such, actors modifying or fine-tuning a model are not automatically subject to all the obligations for providers of general-purpose AI models. The guidelines clarify that these actors become providers only in exceptional circumstances, specifically when the modification or fine-tuning uses more than one-third of the original model's training compute….This high threshold means most fine-tuning, adaptations, and minor modifications will not subject developers to the obligations for providers".
However, as the 'one third of original compute' threshold is only an indicative criterion, and given there is no further guidance on the meaning of 'significant change', we foresee that significant changes to a GPAI model's generality or capabilities could be made before the threshold is met. Downstream modifiers should therefore treat this threshold as only one element of their assessment of modifications to GPAI models, as illustrated in the sketch below.
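A minimal worked sketch of the 'one third' arithmetic follows, assuming the fallback logic described above. The function and parameter names are hypothetical; the thresholds come from the guidelines.

```python
# Purely illustrative sketch of the guidelines' indicative 'one third'
# criterion for downstream modifiers. Thresholds come from the guidelines;
# the function and parameter names are hypothetical. This is one element of
# an assessment, not a substitute for evaluating whether a modification
# significantly changes the model's generality, capabilities or systemic risk.
from typing import Optional

GPAI_THRESHOLD = 1e23         # FLOP: presumption a model is a GPAI model
HIGH_IMPACT_THRESHOLD = 1e25  # FLOP: presumption of high impact capabilities

def modification_exceeds_indicative_criterion(
        modification_compute_flop: float,
        original_training_compute_flop: Optional[float],
        original_has_high_impact_capabilities: bool) -> bool:
    """Compare the modification's training compute against a third of the
    original model's training compute. Where the original figure is unknown,
    fall back to a third of the relevant presumption threshold: 10^25 FLOP
    for models with high impact capabilities, otherwise 10^23 FLOP."""
    if original_training_compute_flop is not None:
        reference = original_training_compute_flop
    elif original_has_high_impact_capabilities:
        reference = HIGH_IMPACT_THRESHOLD
    else:
        reference = GPAI_THRESHOLD
    return modification_compute_flop > reference / 3

# Fine-tuning a model of unknown compute (not high impact) with 5 x 10^22
# FLOP exceeds a third of 10^23 (~3.3 x 10^22), so provider status may follow.
print(modification_exceeds_indicative_criterion(5e22, None, False))  # True
```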
Open-source exemptions
The guidelines clarify the conditions under which providers of GPAI models released under a free and open-source licence may be exempt from certain obligations under the AI Act.
Providers of such models may be exempt from the requirement to:
- Maintain technical documentation for authorities
- Provide documentation to downstream AI system providers
- Appoint an EU representative (for non-EU providers)
These exemptions only apply if the model:
- Is released under a truly free and open-source licence that allows access, use, modification, and distribution without monetisation
- Makes publicly available the model's parameters, including weights, architecture, and usage information
- Is not classified as a general-purpose AI model with systemic risk
Next steps, enforcement and transition periods
The guidelines should be welcomed as offering valuable direction and clarity to providers and downstream parties. Nonetheless, uncertainty in the drafting persists, and we will observe how the Commission interprets the guidelines in practice.
Although the Commission approved the content of the guidelines on 18 July 2025, formal adoption is still required by the Commission at a later date. It is only from that moment that the guidelines will be applicable.
Although the resources were published relatively late, the Commission has reiterated throughout that obligations for providers of GPAI models will apply from 2 August 2025. From this date, providers placing these models on the market must have complied with their AI Act obligations, notifying the AI Office without delay about models with systemic risk to be placed on the EU market.
It is worth reiterating that the Commission’s enforcement powers do not enter into application until 2 August 2026. The Commission will only take enforcement steps, including fines, after this date.
The Commission has also made clear that, in this first year, the AI Office will offer to collaborate in particular with providers who adhere to the code, to ensure that GPAI models can be placed on the EU market without delay. If providers adhering to the code do not fully implement all commitments immediately, the AI Office will not consider them to have broken their commitments under the code. Instead, the AI Office will consider them to be acting in good faith and will be ready to collaborate to ensure full compliance.
There is a different timeline for providers of GPAI models placed on the market before 2 August 2025. They have until 2 August 2027 to comply.
What to do now?
Businesses affected should act now. In particular:
- Providers should use the guidance to determine whether:
- Their models would be categorised as GPAI models or GPAI models with systemic risk, or whether any of the exemptions to the classification apply to them (training compute thresholds), or
- They are eligible to benefit from the transitional provisions that extend compliance timelines for existing GPAI models
- Downstream operators should use the guidance to assess whether their modifications to existing GPAI models may result in them becoming GPAI model providers
- GPAI model providers should:
- Review and update contracts, documents and policies to align with applicable obligations under Chapter V of the AI Act
- Consider whether adherence to the code may streamline compliance, in particular by taking advantage of the AI Office's proposed collaborative approach over the next year
- Use the template for the required public summary of training content and publish it on their website
[1] Article 3(3) AI Act: ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge