
AI CONTRACT.ING

An easier way to draft AI clauses

A tool to make it easier to use the SCL AI Act template clauses.

Filter for relevant clauses and change the defined terms on the left. Click to
select [drafting][options], and then copy the clause text with selected options,
or with all options.

Defined terms: AI Solution, Agreement, The Deployer, The Provider

FILTERS

Accuracy, AI Act, AI Literacy, Breach, Certification, Change, Classification, Complaints, Compliance, Conformity Assessment, Corrective Actions, Cybersecurity, Data Governance, Deployer, Documentation, EU AI Act, Exemptions, Fundamental Rights, GPAI, HRAIS, Incidents, Indemnity, Information, IPR, Military, Misuse, Open-source, Oversight, Post-market Monitoring, Prohibitions, Provider, Purpose, R&D, Risk, Scoping, Short Form, Suspension, Systemic Risk, Termination, Territoriality, Third Parties, Transparency


SCOPING

AI SOLUTION

The EU AI Act applies to AI systems. Parties may wish to clarify if the contract
involves an AI system and is subject to the EU AI Act.

Is an AI system


[The Provider warrants and represents that] OR [The Parties agree that]:

(a) the AI Solution is an 'AI system' as defined in the EU AI Act and is subject
to the relevant provisions of the EU AI Act.

Not an AI system


[The Provider warrants and represents that] OR [The Parties agree that]:

(a) the AI Solution is not an 'AI system' as defined in the EU AI Act, and
neither Party shall suggest otherwise.

We expect the Provider to be better placed to assess whether the technology is an AI system under the EU AI Act.

Neither Party should agree on the categorisation of the AI system without due
diligence.

If the Parties agree that the technology is not an AI system, this may not
prevent a regulator from treating it as such.

In any event, the Parties may wish to include more details regarding external
communications and regulatory responses.

It is worth considering whether future versions of the AI system might impact
these assessments.

GENERAL PURPOSE AI MODEL (WITH OR WITHOUT SYSTEMIC RISK)

The EU AI Act applies to GPAI models, as defined under the EU AI Act (each a
'GPAI Model'). Parties may wish to clarify whether their contract involves a
GPAI Model, distinct from an AI system or another model, and is therefore
subject to the EU AI Act or that they consider that the technology doesn’t
constitute a GPAI Model under the EU AI Act.

Furthermore, they may wish to clarify that the model either is or isn’t a GPAI
Model with systemic risk.

Is a GPAI Model


[The Provider warrants and represents that] OR [The Parties agree that]:

(a) the AI Solution is or involves a GPAI Model [with] OR [without] systemic
risk and is therefore subject [in principle] to the relevant provisions of the
EU AI Act.

Not a GPAI Model


[The Provider warrants and represents that] OR [The Parties agree that]:

(a) the AI Solution does not involve a GPAI Model (whether with or without
systemic risk) and is therefore not subject to the relevant provisions of the EU
AI Act, and [subject to any legal compulsion to do so] neither Party shall
suggest otherwise to any third party.

Whether or not the technology constitutes a GPAI Model may be a difficult question, and it is likely to be one to which the Provider will have given considerable thought. This is likely something that the Provider alone will need to determine, and in any event, the Provider's view or any contractual agreement will not prevent a competent regulator from determining the status of that technology as a GPAI Model under the EU AI Act.

Whether a GPAI Model has systemic risk is subject to a specific regime under the
EU AI Act, and the contract is likely to have even less impact on this point.

Where the provider of the GPAI Model has determined that it does not have
systemic risk, they are likely to want to ensure that neither party suggests
otherwise. This is something that may need to be clarified in the contract.

An issue may arise regarding downstream fine-tuning or modification of a GPAI Model. For example, a downstream customer could fine-tune it to such an extent that it is deemed to have created a new GPAI Model, thereby becoming the provider of that new model. However, GPAI Models are defined under Art 3(63) by their inherent capabilities, not their actual usage or fine-tuning, so downstream usage is unlikely to affect the upstream obligations on the GPAI Model provider.

Indeed, the definition in Art 3(63) states that an AI system may involve a GPAI
Model 'regardless of the way the model is placed on the market'. The Parties may
wish to consider drafting to prevent downstream fine-tuning or to allocate
responsibility for compliance with the EU AI Act to the downstream customer
where it might be deemed to have created a new GPAI Model.

CLASSIFICATION OF GPAI MODELS AS GPAI MODELS WITH SYSTEMIC RISK

Downstream entities may seek more granular assurances that the GPAI Model is not
trained with compute greater than 10^25 floating point operations (FLOPs) to
avoid the high impact presumption.

Drafting


The Provider further represents and warrants that at the date of this Agreement
[the relevant GPAI Model does not have high impact capabilities according to Art
51(1) of the EU AI Act] OR [the cumulative amount of compute used for the
training of the relevant GPAI Model is not greater than 10^25 floating point
operations].

Under the EU AI Act as published, a GPAI Model will be presumed to have high
impact capabilities when the cumulative amount of compute used for its training
is greater than 10^25 FLOPs. This is generally not a straightforward
calculation.
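To illustrate the arithmetic behind the presumption, the following minimal sketch applies the widely used 6 x parameters x training-tokens approximation for dense transformer training compute. The heuristic, the figures and the function names are our own assumptions for illustration; the EU AI Act does not prescribe a calculation method.

    # Rough estimate of cumulative training compute (illustration only).
    # The 6 * N * D approximation and all figures below are assumptions,
    # not a method prescribed by the EU AI Act.

    THRESHOLD_FLOPS = 1e25  # Art 51(2) presumption threshold


    def estimate_training_flops(parameters: float, training_tokens: float) -> float:
        """Approximate training FLOPs as 6 * N * D (forward and backward passes)."""
        return 6 * parameters * training_tokens


    def exceeds_presumption(parameters: float, training_tokens: float) -> bool:
        """True if the estimate crosses the 10^25 FLOPs presumption."""
        return estimate_training_flops(parameters, training_tokens) > THRESHOLD_FLOPS


    # Example: a 70B-parameter model trained on 15T tokens:
    # 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the presumption threshold.
    print(exceeds_presumption(7e10, 1.5e13))  # False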

The European Commission may change this threshold in future. Therefore, the
Deployer may wish to incorporate a more general clause stating that the GPAI
Model does not have 'high impact capabilities' under Art 51(1) of the EU AI Act.

Note the definition of GPAI Model suggested in section GPAI Model
Classification.

INFORMATION REGARDING CLASSIFICATION

Downstream entities (e.g. the Deployer and subsequent reseller(s)) may want assurances that the classifications are correct, that the Provider has properly investigated them and has reasonable grounds for its decision, with a possible indemnity if the European Commission disagrees with the Provider's designation, albeit it is not easy to see where loss would arise.

Drafting


The Provider will, within [insert] days of the execution of this Agreement and
thereafter upon the Deployer’s request: (a) inform the Deployer of the
reasonable grounds for classifications under clauses [[AI Solution]], [[GPAI
Model Classification]], and [[GPAI Model Classification - Systemic Risk]]; and
(b) provide the Deployer with the information that the Deployer may need in
order to evaluate such classification, including details of the computation used
for training of a GPAI Model.

This is no substitute for proper, pre-contract due diligence.

If there has been a misrepresentation or classification error, the Deployer may
want the right to terminate or renegotiate. See below in the Termination Events
section.

PROVIDER OF THE AI SOLUTION

The EU AI Act imposes different obligations across the AI supply chain. The
'provider' (typically the Provider) of an AI system will be subject to more
burdensome obligations than the 'deployer' (typically the Deployer), who is
generally the customer.

The Parties may therefore wish to clarify their position in the supply chain,
particularly to identify which Party is the 'provider' (the Provider).

Drafting


The Parties agree that the Provider is the 'provider' of the AI Solution for the
purposes of the EU AI Act and is subject to the corresponding obligations under
the EU AI Act.

Whilst the Parties’ contractual agreement is unlikely to impact a regulator’s
view of whether or not a particular organisation is the 'provider' of the
relevant AI system (generally the Provider), it might give the Parties,
particularly the customer, reassurances that the Provider acknowledges its
position under the EU AI Act.

It may be more helpful for the Parties to agree that the Provider will comply /
has complied with its 'provider' obligations under the EU AI Act.

There may be instances where both Parties contribute to the development of the
AI system (co-developed AI). In this case, the Parties will need to consider how
the provider role and obligations apply to this scenario under the EU AI Act.
This should be reflected in the relevant clause.

DEEMED PROVIDER UNDER THE EU AI ACT

Providers are subject to more burdensome obligations under the EU AI Act than
deployers, particularly for high-risk AI systems ('HRAIS'). The EU AI Act
contains provisions which can 'deem' a deployer the provider of a HRAIS
(replacing the original HRAIS provider) if the deployer: (a) puts their name or
trademark on the HRAIS, (b) makes a substantial modification to a HRAIS, or (c)
uses a non-HRAIS as a HRAIS. The Parties may therefore wish to make clear
whether any of these circumstances are intended to apply, to give certainty
about their respective roles under the EU AI Act.

Deemed Provider


The Deployer will deploy the AI Solution in its own name (rather than in the
name of the Provider) and it shall accordingly be the 'provider' of the AI
Solution in place of the Provider for the purposes of the EU AI Act.

Avoid Deemed Provider


The Provider acknowledges that the Deployer will not:

(a) put its name or trademark on the AI Solution;

(b) make a substantial modification to the AI Solution, including (without
limitation) a change that materially alters its intended purpose, design, or
performance;

(c) use the AI Solution for a high-risk purpose for the purposes of the EU AI
Act.

Again, the Parties’ contractual agreement will not in and of itself determine the application of the EU AI Act in terms of identifying the provider/deployer.

However, this could be a particularly helpful point to capture in a contract because if, for example, in the first scenario the Deployer doesn’t deploy the AI system in its own name, resulting in the Provider being treated as the provider of the AI system by a regulator under the EU AI Act, then the Provider may have a breach of contract claim against the Deployer, potentially entitling the Provider to recover losses it may have suffered by virtue of having been so treated (but consider potential issues regarding recoverability of losses, particularly arising from regulatory fines).

The Parties should consider the drafting in [High-risk AI] relating to trademarks when including this clause.

EXTRA-TERRITORIALITY UNDER THE EU AI ACT

The EU AI Act has extra-territorial effect. It applies to:

(a) providers who place on the market or put into service AI systems or GPAI
Models in the EU, whether or not they are established in the EU;

(b) deployers of AI systems who are located within the EU; and

(c) providers and deployers of AI systems that are located outside the EU, but
whose output (e.g. content, recommendations, analysis, and decisions) is used in
the EU.

Organisations located or conducting business outside of the EU may wish to
ensure that their AI system does not fall within the scope of the EU AI Act and,
to that end, they may wish to include appropriate provisions in any contracts
relating to their AI systems.

Neither make available


The Parties agree that neither of them shall make available or use (or permit
the making available or use of) the AI Solution in the EU.

Deployer not use within


The Deployer agrees that it shall not use (or permit the use of) the output
produced by the AI Solution in the EU.

This clause is only needed when:

(a) the Deployer is not located/established in the EU and does not use the AI output in the EU; and

(b) the Provider does not place the AI Solution on the market, put it into service, or use the AI output in the EU.

This drafting is intended to give protection (particularly to the Provider) in
case the Parties act in a way which brings the AI system into scope of the EU AI
Act.

If, for example, the Deployer makes the AI system available in the EU (e.g. by
licensing it) or uses the AI system (or even just its output) in the EU, then
the Provider may have a breach of contract claim against the Deployer,
especially if the Provider suffers losses (e.g. a regulatory fine) due to this.

The Provider may wish to bolster its protection by including an indemnity (but
consider potential issues regarding recoverability of losses from the Deployer
generally, and particularly any arising from regulatory fines).

Both Parties may wish to extend the protection in these provisions by requiring
each of them to ensure that, for example, affiliates also do not make available
or use the AI system (or its outputs) in the EU.

The Parties may also wish to consider obligations around applying technical
controls to prevent unintended scope creep of the AI Solution, to prevent these
issues arising at a technical level (to the extent possible).

EXEMPTIONS UNDER THE EU AI ACT

The EU AI Act contains two key exemptions where its obligations do not apply.
These exemptions cover the use of AI systems that are strictly limited to:

(a) military, defence or national security purposes; or

(b) scientific research and development.

Parties who are developing/using AI systems for these specific purposes and who
are comfortable (e.g., from a commercial perspective) in limiting the use of the
AI system to these purposes are likely to want to include contractual provisions
ensuring this.

Solely military, defence or national security


The Parties agree that the intended purpose of the AI Solution is solely for
military, defence, or national security use.

The Deployer agrees that it shall use the AI Solution solely for military,
defence, or national security use.

Research and development


The Parties agree that the AI Solution will be jointly developed and/or used for
the sole purpose of scientific research and development, and neither Party shall
commercialise or suggest that the AI Solution is commercially available without
the prior written consent of the other Party.

The EU AI Act does not define 'military, defence or national security' or
'scientific research and development', although some explanatory content is
included in Recitals (24) and (25). The Parties may therefore wish to include
additional detail to clarify the scope of these areas, although this may not
necessarily align with the view of an EU AI Act regulator.

As with the territoriality provisions above, the Parties may wish to bolster the
protection here by adding, for example, an indemnity and/or an obligation to
ensure that affiliates are also subject to these restrictions.

FREE AND OPEN-SOURCE UNDER THE EU AI ACT

The EU AI Act does not apply to AI systems released under a free and open-source
licence, unless that AI system is used as a prohibited AI system (as set out
under Art 5 of the EU AI Act), HRAIS, or an AI system to which the transparency
obligations in Art 50 of the EU AI Act apply.

The EU AI Act clarifies that 'free' in this context means that the AI system
should not be provided in exchange for a monetary fee or an equivalent value,
e.g., the use of personal data (other than in limited circumstances).

The provider of an AI system which is intended to be free and open-source
(within the meaning of the EU AI Act) may wish to clarify, e.g., in its T&Cs or
any particular contract with a counterparty, that the AI system is intended to
be made available on this basis.

It may also be prudent to contractually require the deployer not to use the AI
system in such a way which might call into question the legitimacy of the free
and open-source licence, e.g., as a HRAIS.

Free and open-source


The Parties acknowledge and agree that the AI Solution is provided on a free and
open-source basis.

The Deployer agrees and acknowledges that it has not and will not provide any
remuneration or any other value in return for the use of the AI Solution.

Not free and open-source


The Parties acknowledge and agree that the AI Solution is not provided on a free and open-source basis and Art 2(12) of the EU AI Act does not apply.

As with other provisions, the contractual status of the licence under which the
AI system is supplied will not be determinative of the regulatory position under
the EU AI Act. However, it is likely to provide a helpful basis to support any
argument that a particular AI system is provided under a free and open-source
licence.

This drafting relates principally to the main exemption for free and open-source
licensed AI systems under Art 2(12) of the EU AI Act.

It is worth considering whether there will be a negotiated contract for free and open-source AI systems, as such AI systems may be released freely on the internet subject to standard open source licences such as MIT, BSD, GPL, Apache etc. (a 'FOSS Licence').

The Provider may want to consider modifying or enhancing the preferred FOSS
Licence (to the extent permissible) to capture the intent of these suggested
clauses, and the Deployer may want to consider the impact of using open source
AI subject to a FOSS Licence that does not incorporate language to this effect.

CHANGES UNDER THE EU AI ACT

The EU AI Act expressly contemplates changes to the regulatory status of an AI
system, e.g., through concepts such as 'substantial modification' (see Art
3(23)) and the change of an intended purpose from non-high-risk to high-risk
(see e.g., Art 25(1)(c)).

Furthermore, changes to the AI system (e.g., technical changes or upgrades)
could also impact the regulatory status of an AI system or GPAI Model under the
EU AI Act.

Parties may therefore wish to engage a degree of control over any such changes
through their contract, particularly where the counterparty may be able to
affect this regulatory status (whether intentionally or not) by making changes
to the AI system.

Drafting


The Parties acknowledge and agree that this Agreement is entered into based on
the status of the AI Solution as represented by the Parties’ statements in this
clause [including without limitation [XX]].

The Provider will promptly inform the Deployer if, during the term of the
Agreement and for the purposes of the EU AI Act, the AI Solution becomes or
involves a change in classification or status from that provided in clauses
[insert as applicable above].

No Party shall make any change to the AI Solution and/or any change to the use
of the AI Solution that would affect or alter such status of the AI Solution
under the EU AI Act, except by the express agreement of the Parties as agreed in
writing [pursuant to the Change Control Procedures].

The Parties should ensure that any controls or restrictions on the AI Solution or its usage apply to all factors relevant to the Parties (including in the Instructions for Use and/or any obligations/restrictions applicable to a GPAI Model).

Note that the provider/deployer may wish to release subsequent versions or updates to the AI system, in which case this drafting will prevent these versions/updates from changing the status of the AI Solution (e.g., to a HRAIS under the EU AI Act), unless the Parties expressly agree otherwise.

The Parties may wish to consider provisions around other changes to the AI
system, i.e., those which don’t affect or alter the status of the AI system,
including under the EU AI Act (e.g., a notification obligation on the part of
the Provider).

The drafting at section 14 (in particular relating to Regulatory Classification
Events) addresses where a regulator decides that the AI Solution falls within
the scope of the EU AI Act.

CHANGE IN CLASSIFICATION OF GPAI MODELS TO INCLUDE SYSTEMIC RISK UNDER THE EU AI
ACT

The Deployer should be informed of any change to the classification of the
technology, but downstream entities may seek more granular assurances where a
GPAI Model becomes a GPAI Model with systemic risk.

If the AI Solution is a GPAI Model (but not a GPAI Model with systemic risk),
the Provider must notify the Deployer of any change to that classification.

Drafting


If the AI Solution meets the requirements to be classified as a GPAI Model with
systemic risk during the term of this Agreement, the Provider will inform the
Deployer within [14] days after [becoming aware that] the requirement is met.

The Provider must notify the European Commission within two weeks if its GPAI Model meets the threshold to be classified as a 'GPAI Model with systemic risk'.

As such, it is unlikely that the Provider will agree to provide the Deployer with notice before that time.

The Deployer may wish to seek assurances that the Provider will inform the
European Commission that the GPAI Model has reached the threshold to be
classified as a GPAI Model with systemic risk.


USE / PURPOSE

INTENDED PURPOSE UNDER THE EU AI ACT

Given the EU AI Act’s approach to risk categorisation and the obligations placed
on providers, deployers and users of AI systems, it is important that both
Parties are clear on the intended purpose of the AI system and that this is
documented in the contract to support the Parties’ classification of whether an
AI system is or is not high risk.

Drafting


(a) The Parties acknowledge that the AI Solution is made available by the
Provider to the Deployer for the sole purpose of [insert intended use case(s)]
and that the Deployer may not use the AI Solution for any other purpose without
the prior written consent of the Provider.

(b) To the extent that the Deployer is authorised to make available the AI Solution to any third parties under and in accordance with the terms and conditions of this Agreement, the Deployer represents and warrants to the Provider that it shall ensure the use of the AI Solution by such third parties conforms to the purpose set out in clause (a).

The Parties must ensure that use of the AI Solution is closely monitored and
tightly controlled, including any downstream use, and that Art 5 of the EU AI
Act (prohibited AI practices) is considered whenever licensing or sub-licensing
the AI Solution.

Consider listing out prohibited uses explicitly to align with Art 5 and/or
deeming any use outside of the purpose defined in the [Intended Purpose] section
as misuse and material breach.

Note that any AI systems classified as high-risk under the EU AI Act are subject
to the majority of the obligations under the EU AI Act, so it is in the Parties’
interest to accurately identify and agree the category of AI system, especially
if it is not deemed high risk (in which case less prescriptive obligations apply
under the EU AI Act), notwithstanding that this may not influence or align with
a regulator’s classification.

PROHIBITED AI UNDER THE EU AI ACT

Given the EU AI Act specifies AI systems which are prohibited in the EU, it
might be worth including appropriate clauses to ensure the contract makes it
clear that the AI Solution cannot and should not be used for any of the uses
prohibited under the EU AI Act.

Drafting


The Deployer shall not use, or facilitate or allow others to use, the AI
Solution for practices prohibited under the EU AI Act [or The Provider’s
Policies].

This document focuses on prohibited use under the EU AI Act. There may be other
prohibitions in the jurisdictions where the AI Solution is used, which the
Parties should consider and address.

In particular, larger suppliers will likely have their own acceptable use or responsible AI policies. It remains to be seen how these will interact with the Instructions for Use. While it is not unusual for contracts to provide for compliance with such policies, there is a danger that they cut across other contractual provisions and may be revised without a counterparty’s involvement. The ultimate contractual position will depend on the power balance between the Parties.

'Policies' in the draft clause would need to be defined.

Given the potential impact of prohibited use, the Provider is likely to seek to
include breach of this clause as a Suspension and Termination Event.

HIGH-RISK AI

Although the EU AI Act places numerous obligations on both Parties for HRAIS,
the Provider, in particular, will want to ensure that the intended use of the AI
Solution does not cause it to be deemed a HRAIS.

Providers will want to ensure that their non-HRAIS is not used for a high-risk
use set out in Annex I or Annex III.

If the Parties intend for the deployer to use the AI Solution for an Annex III
high-risk use, then the deployer must comply with the requirements under Art
6(3) to ensure the AI system is not deemed a HRAIS.

The requirements of Art 6(3) have been listed out in general form here but the
provider would benefit from tailoring these to limit the deployer to the actual
ground(s) relied on and specifying the compliant intended use(s).

Where the AI system is a HRAIS, the deployer must comply with their Art 26
obligations to use and monitor the AI system in accordance with the Art 13
instructions for use from the “provider”.

Non-HRAIS


The Parties agree that the AI Solution is not intended to be deployed as a
HRAIS.

The Deployer shall not use the AI Solution for any purpose that may cause the AI
Solution to be deemed a HRAIS.

If the Deployer causes the AI Solution to be deemed a HRAIS, the Deployer shall
be the provider of the AI Solution.

If a Regulatory Classification Event occurs, to the extent permitted under the
EU AI Act:

(i) the Party that first identifies the Regulatory Classification Event
occurrence shall promptly notify the other Party of the occurrence of such
event, and the Parties shall promptly discuss the impact of the event in
accordance with the Change Control Procedure; and

(ii) unless otherwise agreed, the cost of implementing any changes that result
from the Regulatory Classification Event shall be borne by [The Provider] OR
[The Deployer].

Annex III Use


The Parties agree that the AI Solution shall be used for [insert use], which
falls under Annex III of the EU AI Act.

The Parties agree that the AI Solution is not intended as a HRAIS on the basis
that the Deployer shall not use the AI Solution in a way that poses a
significant risk of harm to the health, safety or fundamental rights of natural
persons.

The Deployer shall only use the AI Solution for [insert use] which falls into
the following exemption under Art 6(3) of the EU AI Act:

[(i) perform a narrow procedural task;]

[(ii) improve the result of a previously completed human activity;]

[(iii) detect decision-making patterns or deviations from prior decision-making
patterns;]

[(iv) perform a preparatory task to an assessment relevant for the purposes of
the Annex III use specified in clause 2.4 above]

[or perform any other tasks, provided that the Deployer complies, at all times, with the [High-risk AI] clause.]

The Deployer shall not use the AI Solution to:

(i) replace or influence the previously completed human assessment, without
proper human review; or

(ii) perform profiling of natural persons.

If the Deployer causes the AI Solution to be deemed a HRAIS, or is otherwise
deemed to be a provider by a regulatory authority, the Deployer shall be the
provider of the AI Solution.

HRAIS / the Provider remains Provider


The Parties acknowledge that the AI Solution is a HRAIS and that the Provider shall be the Provider of the AI Solution.

The Deployer will not:

(i) put its name or trademark on the AI Solution, except as expressly permitted
under this Agreement;

(ii) make a Substantial Modification to the AI Solution that maintains the
high-risk nature of the AI Solution as defined in the EU AI Act.

Where the AI Solution is intended to be used as a safety component for a product
covered by Annex I Section A legislation, the Deployer shall not:

(i) place the AI Solution on the market with the product under the Deployer’s
name or trademark;

(ii) put the AI Solution into service under the Deployer’s name or trademark
after the product has been placed on the market.

HRAIS / the Deployer as Provider


The Parties acknowledge that the AI Solution is a HRAIS, and that the Deployer
shall be the Provider of the AI Solution.

The Deployer shall, and the Provider shall cooperate by providing necessary
information and other reasonable assistance to enable the Deployer to, fulfil
the obligations of a Provider of a HRAIS under the EU AI Act.

A HRAIS can be captured under either Annex I, which pertains to EU safety
legislation (including, but not limited to, machinery, medical devices,
vehicles, systems and equipment), or Annex III, which pertains to other matters
of public interest (including, but not limited to, biometrics, education, and
law enforcement).

If an Annex III high-risk use is intended, then the AI system must be used more
restrictively to not be deemed a HRAIS. This includes: performing a narrow task;
improving the result of previously performed human activity; not being used to
replace human decision-making without human review; performing a preparatory
task for an Annex III related assessment. The AI system must not profile natural
persons.

Where the AI Solution is not a HRAIS, the deployer's subsequent use of it as a HRAIS will cause the deployer to be the deemed provider under Art 25(1)(c) of the EU AI Act.

If the AI Solution is a HRAIS from the outset, both Parties will be subject to
further obligations. The Parties will want further explicit drafting to clarify
who is the provider for a HRAIS and whether the provider needs to co-operate
with the deployer-deemed-provider to discharge their obligations. If the
provider has specified that the AI Solution should not be made into a HRAIS,
they will not be obligated to hand over necessary documentation to the
deemed-provider.

Where the deployer is not a provider, the deployer must ensure they use the
HRAIS in accordance with the instructions for use for the HRAIS (Art 26(1)) and
assign suitably qualified, capable and supported natural persons to oversee the
HRAIS (Art 26(2)).

'Regulatory Classification Event' would need to be defined in the contract.

MISUSE UNDER THE EU AI ACT

Providers of HRAIS are obliged to produce risk management systems and provide instructions for use.

Parties may seek to address potential misuse in their contracts with reference to these obligations, and to allocate their respective responsibilities in relation to its prevention.

Drafting


(a) the Provider shall provide the Deployer with instructions for use which conform to Art 13 of the EU AI Act (the 'Instructions for Use'). Without limitation, these Instructions for Use shall specifically: (i) identify any known or reasonably foreseeable conditions of misuse; and (ii) provide information to enable the Deployer to use the AI Solution appropriately. [Such Instructions for Use shall specifically identify reasonable steps which The Deployer may take to prevent any misuse.]

(b) the Provider shall design, develop and document, operate and monitor the AI Solution over its entire lifecycle to mitigate the risk of known and reasonably foreseeable conditions of misuse. This shall include (but not be limited to) the matters in [Appendix].

(c) the Deployer shall [exercise reasonable care and skill to] comply with the
Instructions for Use.

[(d) The Provider’s inclusion of a 'reasonable step' in the Instructions for Use is not determinative of it being such a step. However, its inclusion (together with any communications between the Parties about it) will be a factor in considering reasonable care and skill.]

(e) Without limiting any other defences which may be available, the Deployer shall not be liable for any non-compliance under clause (c) to the extent that the non-compliance is caused (or contributed to) by the Provider’s failure to comply with Art 13 of the EU AI Act [and clauses (c), [Transparency and Provision of Information to Deployers] and [Accuracy, Robustness and Cybersecurity] in respect of the content of the Instructions for Use] (and nothing in this clause shall transfer risk or responsibility for the same).

Art 13 of the EU AI Act refers to 'conditions of misuse', which appears to focus on the circumstances that may give rise to misuse, rather than misuse itself. It also does not expressly require providers to identify 'reasonable steps to prevent any misuse', although that may be implicit in the requirement (at Art 13.3(b)(vii)) to include 'where applicable, information to enable deployers to interpret the output of the HRAIS and use it appropriately'. The last sentence of clause (a) is intended to close any potential gap and to make the wider contractual scheme more workable.

Sub-clause (c) is intended to address a circumstance where the consequent 'reasonable steps' in the Instructions for Use are more onerous than appropriate. This is at the expense of certainty as to what the Deployer needs to do.

The square bracketed text in the first clause offers a way of placing the onus
on the Provider to identify misuse, which – in turn – the Deployer must seek to
prevent.

See further drafting in [Transparency and Provision of Information to Deployers]
below in relation to Instructions for Use.

SERIOUS INCIDENTS UNDER THE EU AI ACT

Under the EU AI Act, the Provider is required to report 'serious incidents' in
relation to a HRAIS.

The Provider may want to redress the balance contractually, by obliging the Deployer: (a) to avoid 'serious incidents'; (b) to report them between the Parties; and (c) to cooperate in their investigation and corrective actions.

The Deployer may also want prompt receipt of any reports under Art 73.

Given 'serious incidents' are broadly defined under the EU AI Act, the Parties may also wish to tailor the definition to their use case.

Drafting


A 'Serious Incident' means an [actual or reasonably suspected] incident or
malfunctioning of the AI Solution that directly or indirectly leads to any of
the following:

(a) death or serious harm to a person’s health;

(b) a serious and irreversible disruption of the management or operation of
critical infrastructure;

(c) infringement of obligations under EU law intended to protect fundamental
rights;

(d) serious harm to property or the environment.

The Parties agree that Serious Incidents shall include (but not be limited to):
[list matters specific to the use case].

The Deployer shall take reasonable steps to mitigate the risk of Serious
Incidents. However, without limiting any other defences which may be available,
the Deployer shall not be liable for any Serious Incident:

(a) unless it is caused by the negligence of the Deployer or a failure to
exercise reasonable care and skill in the performance of its obligations under
this Agreement; and

(b) to the extent that the Serious Incident is [caused by] OR [contributed to by] OR [primarily attributable to] the Provider’s failure to comply with the EU AI Act or this Agreement, including (but not limited to) deficiencies in the Instructions for Use and Risk Management System (and nothing in clauses [this clause, and the next two clauses] shall transfer risk or responsibility for the same).

If either Party: (a) establishes a causal link between the AI Solution and a Serious Incident; or (b) establishes the reasonable likelihood of such a link; then it shall notify the other Party of the Serious Incident as soon as possible.

In the event of a notification under clause [previous clause], the Parties shall
cooperate in the performance of necessary investigations, reporting and
corrective actions in relation to the Serious Incident and the AI Solution.

The Provider, where it acts as a Provider of the AI Solution, shall promptly
provide the Deployer with copies of any reports to market surveillance
authorities under Art 73 of the EU AI Act in connection with the Deployer’s
(actual or potential) use or misuse of the AI Solution.

Although the EU AI Act’s 'serious incidents' regime is confined to HRAIS,
Parties may still wish to adopt it in their agreements, but this may have a
significant time or cost impact.

The definition of “serious incident” in the EU AI Act does not extend to cover
“reasonably suspected” incidents. However, depending on the use case, this
greater sensitivity may be attractive.

The drafting in the clause 'the Deployer shall take reasonable steps to mitigate the risk...' presumes that the Deployer has a limited, primarily supportive, role in mitigating Serious Incidents. Consequently, the Deployer should not be held liable for Serious Incidents where it has exercised reasonable care and skill, or where the Provider is at fault. The Parties should consider the specific nature of the Serious Incidents and the AI Solution in question, as well as the degree to which the Provider relies on the Deployer in performance of its obligations under the Agreement. The Parties should also consider these clauses in the context of any existing provisions within the Agreement that address when the Provider may be entitled to relief from liability, so as to ensure appropriate allocation of responsibilities and liabilities between the Parties.

GENERAL PURPOSE AI UNDER THE EU AI ACT

A downstream provider/integrator of a GPAI Model will expect promises from the upstream GPAI Model provider that it will provide the transparency information required under Art 53(1)(b) and Annex XII, and will do so in accordance with the GPAI Model Codes of Practice that are to be published in April 2025.

Drafting


Subject to clause [next clause], the GPAI Provider shall:

(a) provide or make available the GPAI Model Transparency Information to the Downstream Provider within five days of a written request for the same from the Downstream Provider to the GPAI Provider;

(b) keep the GPAI Model Transparency Information up-to-date; and

(c) ensure that both: (i) the GPAI Model Transparency Information; and (ii) the provision or making available of the GPAI Model Transparency Information to the Downstream Provider, [each comply with] OR [are in accordance with] the requirements [and recommendations of] the GPAI Model Code(s) of Practice.

Other than in respect of GPAI Models with Systemic Risks, the obligations on the
GPAI Provider set out in clause (previous clause) shall not apply in respect of
any GPAI Model released by the GPAI Provider under a free and open-source
licence that allows for the access, usage, modification, and distribution of
that GPAI Model, and whose parameters, including the weights, the information on
the GPAI Model architecture, and the information on the GPAI Model usage, are
made publicly available.

The Downstream Provider shall comply with the terms of the licence and
acceptable usage policies referred to in the GPAI Model Transparency
Information.

These provisions will be updated once the GPAI Model Codes of Practice are
published. Publication is planned for April 2025.

Clause (c) refers to the GPAI Model Codes of Practice as establishing the rules for compliance with Art 53 and Annex XII. Art 53(4) refers to GPAI Model providers being able to rely on the Codes 'until a harmonised standard is published'. It then goes on to refer to a GPAI Model provider that doesn’t comply with a Code or a technical standard as having an additional right to 'demonstrate alternative adequate means of compliance for assessment by the Commission'.

Given the amount of effort that the AI Office appears (as of Q4 2024) to be putting into the consultation and drafting of the GPAI Model Codes of Practice, we assume it is likely the Codes will be the primary point of reference for compliance with Art 53 in practice. As such, clause (c) does not refer to technical standards or any alternative means of compliance. This should be kept under review.

SYSTEMIC RISK UNDER THE EU AI ACT

The additional obligations from the EU AI Act on GPAI Models with systemic risk
are placed solely on the Provider.

Thus, where a GPAI Model without systemic risk is being provided, the Provider
must ensure the Deployer does not cause the GPAI Model to be designated as
posing a systemic risk.

The Provider must ensure that the GPAI Model is not used or developed in any way that would meet the threshold for having high impact capabilities (objectively by benchmarking, or more subjectively by decision of the European Commission).

Specifically, GPAI Models will be presumed to have systemic risk where trained using more than 10^25 FLOPs of computation, which should therefore be explicitly forbidden in the contract.

Under Annex XIII, the GPAI Model will be deemed to have a high impact on the
internal market if it is available to more than 10,000 registered business
users. This will not be the only factor in a decision but is the only threshold
that can be contracted for to mitigate the risk of a decision by the European
Commission.

More bespoke drafting may be necessary to curtail the Deployer's use of or experimentation with the GPAI Model to ensure that the threshold is not met, especially if a novel use or capability could arise which would more easily meet the threshold, but this will need to be considered on a case-by-case basis.

If the AI Solution is a GPAI Model with systemic risk, downstream entities will
also want assurances that the Provider has complied with obligations for GPAI
Models with systemic risk in Art 55.

No Systemic Risk


The Deployer shall not use, fine-tune, train, or otherwise develop the AI
Solution in such a way that the AI Solution may be at risk of being considered
as having high impact capabilities as defined in the EU AI Act.

The Deployer’s use of the AI Solution shall be limited to no more than 10,000
registered business users in the EU, unless agreed otherwise in writing.

The Deployer shall not train the AI Solution using a computation of greater than
10^25 FLOPs, or such other degree of training computation that exceeds the
presumption in Art 51(2) of the EU AI Act, as amended from time to time.
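As a companion to the 'No Systemic Risk' drafting above, the following minimal sketch shows how a Deployer might monitor reported usage against the two contractual limits (the 10^25 FLOPs training cap and the 10,000 registered business user figure). The UsageReport fields and their data sources are hypothetical, and satisfying these two checks is not a complete test for systemic risk under the EU AI Act.

    # Illustrative check against the two contractual limits above.
    # Field names and data sources are hypothetical assumptions.
    from dataclasses import dataclass

    MAX_TRAINING_FLOPS = 1e25       # Art 51(2) presumption threshold
    MAX_EU_BUSINESS_USERS = 10_000  # reach figure cited above


    @dataclass
    class UsageReport:
        cumulative_training_flops: float   # including any Deployer fine-tuning
        eu_registered_business_users: int  # hypothetical metric source


    def contractual_breaches(report: UsageReport) -> list[str]:
        """Return the contractual limits the reported usage breaches (empty if none)."""
        breaches = []
        if report.cumulative_training_flops > MAX_TRAINING_FLOPS:
            breaches.append("training compute exceeds 10^25 FLOPs")
        if report.eu_registered_business_users > MAX_EU_BUSINESS_USERS:
            breaches.append("more than 10,000 registered business users in the EU")
        return breaches


    print(contractual_breaches(UsageReport(9.0e24, 8_500)))  # []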

Systemic Risk


The Provider represents and warrants that it has complied and will comply with
the requirements set out in Art 55 of the EU AI Act and will provide
confirmation of such compliance upon request from the Deployer.

Systemic risk is defined under Art 3(65) as being specific to a GPAI Model’s
potential broad-reaching impact on the EU market or on matters of public
interest.

A GPAI Model will be classified as having systemic risk if it meets the
requirements in Art 51(1), i.e. having high impact capabilities objectively, or
deemed to have such capabilities or equivalent impact based on a decision of the
European Commission, judged against the Annex XIII criteria, pertaining to the
size, complexity and reach of the GPAI Model.

High impact capabilities are defined broadly as matching or exceeding the
current capabilities of the most advanced GPAI Models. This is a high, general
bar but the Deployer will want to push the capabilities of the GPAI Model being
contracted for. However, there is the fixed limit on training computation under
Art 51(2) to provide some reassurance on the limits of what the Deployer can be
permitted to do.


COMPLIANCE

AI LITERACY UNDER THE EU AI ACT

Providers (the Provider) and deployers (the Deployer) are required to take measures to ensure, to their 'best extent', that all staff and other persons dealing with the AI Solution are well-educated about it. The obligation includes taking into account their technical knowledge, experience, education and training, the context in which the AI Solution is to be used, and the persons or groups of persons on whom the AI Solution is to be used.

Parties are therefore likely to seek assurances that the Art 4 obligations have
been/will be complied with.

Drafting


The Provider warrants that all appropriate and necessary measures have been
taken to ensure, to its best extent, that all its staff [and other persons]
involved in the creation, development and provision of the AI Solution have a
sufficient level of AI literacy in accordance with the requirements of Art 4 of
the EU AI Act.

The Deployer shall take all appropriate and necessary measures to ensure, to its
best extent, that all its staff [and other persons] involved in the deployment
and use of the AI Solution have a sufficient level of AI literacy in accordance
with the requirements of Art 4 of the EU AI Act.

The obligation on the Provider could be included within a more general
obligation to ensure that the Provider’s personnel have all necessary
professional skill and expertise to provide the AI Solution.

RISK MANAGEMENT SYSTEM UNDER THE EU AI ACT

Deployers of HRAIS are likely to seek assurances from providers that their
obligations under Art 9 have all been, and will continue to be, met.

Provider Warranties


The Provider warrants that:

(a) it has established, implemented and documented a risk management system
meeting the requirements of Art 9 EU AI Act (the 'Risk Management System'); and

(b) it shall maintain and keep the Risk Management System under regular
systematic review and updated at all times throughout the entire lifecycle of
the AI Solution.

Provider Assistance


The Provider shall provide to the Deployer all information and assistance
reasonably required for the Deployer to establish, implement and document a risk
management system meeting the requirements of Art 9 of the EU AI Act (the 'Risk
Management System').

The Provider warrants that before delivery of the AI Solution it has tested the
AI Solution to:

(a) identify the most appropriate and targeted risk management measures; and

(b) ensure that the AI Solution performs consistently for its intended purpose
and complies with the requirements of Chapter III, Section 2 of the EU AI Act.

DATA AND DATA GOVERNANCE UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all its obligations under Art 10 to develop the AI Solution
using high-quality data sets for training, validation and testing.

Where AI solution to be trained


The Provider warrants that the AI Solution has been developed on the basis of
training, validation and testing data sets which meet the requirements of Art 10
of the EU AI Act.

The Provider warrants that:

(a) it shall process special categories of personal data only to the extent strictly necessary for the purpose of ensuring bias detection and correction in relation to the AI Solution in accordance with the requirements of Art 10(2), points (f) and (g) of the EU AI Act;

(b) it shall ensure appropriate safeguards for the fundamental rights and freedoms of natural persons and, in addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, it shall ensure that all of the conditions set out in Art 10(5) are met in order for such processing to occur.

Where AI solution not to be trained


The Provider warrants that it has complied with all of the requirements of Art
10 of the EU AI Act in relation to the testing data sets.

In addition to warranties as to compliance with Art 10 requirements, and to the
extent this is not included in the information relating to the system, the
Deployer may wish to consider requirements for the Provider to supply
information and further assurances as to the nature and provenance of the data
sets used for training/testing of the system.

Consider whether there is a likelihood of processing special categories of
personal data. This will also need to be addressed in the related data
processing agreement.

Where the AI Solution has been developed without use of techniques involving the
training of AI models, Art 10 paragraphs 2 to 5 apply only to the testing data
sets.

TECHNICAL DOCUMENTATION UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider has complied with all of its obligations under Art 11 and that these obligations will continue to be met during the term of the agreement.

Drafting


The Provider warrants that it has complied with its obligations relating to
technical documentation in accordance with Art 11 of the EU AI Act and that it
shall keep the technical documentation up to date at all times. [Upon The
Deployer's request, The Provider shall make copies of all technical
documentation available to The Deployer.]

Consider an additional obligation on the Provider to make technical
documentation available to the Deployer. The interaction with broader
confidentiality obligations should also be considered.

RECORD KEEPING UNDER THE EU AI ACT

In order to ensure accountability and safety in the development of HRAIS, the Deployer is likely to seek assurances from the Provider that the Provider has complied with all of its obligations under Art 12 in relation to automatically logging events, and that this requirement will continue to be met during the term of the agreement so that the AI Solution's actions can be traced.

Drafting


The Provider warrants that the AI Solution shall technically allow for automatic
recording (logging) of events over the lifetime of the AI Solution in accordance
with the requirements of Art 12 of the EU AI Act.

If the log information is not readily accessible to the Deployer via the AI
Solution, the Deployer may also wish to include specific obligations on the
Provider to make logs available to the Deployer.

TRANSPARENCY AND PROVISION OF INFORMATION TO DEPLOYERS UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all of its obligations under Art 13 and will continue to do so
during the term of the agreement.

Drafting


The Provider warrants that:

(a) the AI Solution has been and will continue to be designed and developed in such a way as to ensure that the operation of the AI Solution is sufficiently transparent to enable the Deployer to interpret the AI Solution’s output and use it appropriately;

(b) it shall ensure an appropriate type and degree of transparency with a view to achieving compliance with the relevant obligations of both the Provider and the Deployer set out in Section 3 of the EU AI Act throughout the lifetime of the AI Solution;

(c) it shall provide to the Deployer instructions for use of the AI Solution which shall include all of the information required by Art 13 of the EU AI Act (the 'Instructions for Use');

(d) the Instructions for Use shall be made available in an appropriate digital format or otherwise and shall include concise, complete, correct and clear information that is relevant, accessible and comprehensible to the Deployer; and

(e) it shall keep the Instructions for Use under review and fully updated
throughout the lifetime of the AI Solution.

In relation to the Instructions for Use, see also the misuse clauses above
(section 15).

HUMAN OVERSIGHT UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all of its obligations under Art 14 and that it will continue
to do so during the term of the agreement.

Drafting


The Provider warrants that:

(a) the AI Solution has been and will continue to be designed and developed in
such a way, including with appropriate human-machine interface tools, that it
can be effectively overseen by natural persons during the period in which the AI
Solution is in use.

(b) the oversight measures and tools shall meet the requirements of Art 14 of
the EU AI Act [and the AI Solution Requirements Specification].

The specific nature of human oversight measures will need to match the risks and
context of the AI Solution’s use. These measures could be built into the AI
Solution by the Provider or implemented by the Deployer. Where a bespoke AI
Solution is being developed, the Parties will need to set out the details of the
oversight measures in the AI Solution requirements specification.

Note that additional requirements apply in the case of high-risk systems
referenced in para 1(a) of Annex III (remote biometric identification systems
not including AI systems intended to be used for biometric verification the sole
purpose of which is to confirm that a specific natural person is the person he
or she claims to be). The additional requirement is that no action or decision
is taken by the Deployer on the basis of the identification resulting from the
system unless that identification has been separately verified and confirmed by
at least two natural persons with the necessary competence, training and
authority (Art 14 paragraph 5).

ACCURACY, ROBUSTNESS AND CYBERSECURITY UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all of its obligations under Art 15 and that it will continue
to do so during the term of the agreement.

Drafting


The Provider warrants that:

(a) the AI Solution has been and will continue to be designed and developed in such a way that the AI Solution shall meet all of the requirements of Art 15 of the EU AI Act throughout its lifecycle;

(b) the levels of accuracy and the relevant accuracy metrics are declared in the accompanying Instructions for Use;

(c) it shall implement and maintain the measures set out in the Back-up Schedule
throughout the lifecycle of the AI Solution; and

(d) it shall implement and maintain the measures set out in the Cybersecurity
Schedule throughout the lifecycle of the AI Solution.

The European Commission will, in cooperation with relevant stakeholders and
organisations, encourage the development of benchmarks and measurement
methodologies. Where benchmarks and measurement methodologies have been
developed, the Parties may wish to refer to them specifically.

The Deployer may wish to consider what its appropriate remedies should be if the declared levels of accuracy and the relevant accuracy metrics are not met.

Consider whether it is appropriate to include details of technical redundancy,
back-up plans or fail-safe plans. Similarly consider whether it is appropriate
to include details of cybersecurity measures.

OBLIGATIONS OF PROVIDERS OF HRAIS UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all of its obligations under Art 16.

Drafting


The Provider warrants that it has complied and shall continue to comply with all
of its obligations set out in Art 16 of the EU AI Act [including, without
limitation, the specific elements set out in (Clause/Schedule The Provider’s
Obligations)].

Consider whether it is appropriate/necessary to describe in detail any of the
Provider’s obligations.

QUALITY MANAGEMENT SYSTEM, RECORD KEEPING AND AUTOMATICALLY GENERATED LOGS UNDER
THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider has complied with all of its obligations under Arts 17, 18 and 19.

Drafting


The Provider warrants that it has complied with and shall continue to comply
with the requirements of Arts 17, 18 and 19 of the EU AI Act throughout the
lifecycle of the AI Solution.

CORRECTIVE ACTIONS AND DUTY OF INFORMATION UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider
has complied with all of its obligations under Art 20.

Drafting


The Provider warrants that where it considers or has reason to consider that the
AI Solution is not in conformity with the EU AI Act it shall comply with the
requirements of Art 20 of the EU AI Act. Without limitation to the foregoing,
the Provider shall immediately inform the Deployer accordingly.

The Deployer may also wish to include a specific additional obligation on the
Provider to investigate any concerns raised by the Deployer that the AI Solution
is not in conformity with the regulations and to take corrective action.

Art 20 requires the Provider to 'take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate'. The Parties will need to consider, in the context of the AI Solution, its purpose and the circumstances, the various specific remedies which should be available to the Deployer in these circumstances.

CONFORMITY ASSESSMENT, CERTIFICATES AND REGISTRATION UNDER THE EU AI ACT

The Deployer is likely to seek assurances from the Provider that the Provider has complied with all of its obligations under Chapter III, Section 5 (Arts 43, 44 and 49).

Drafting


The Provider warrants that it has complied with and shall continue to comply
with the requirements of Arts 43, 44 and 49 of the EU AI Act throughout the
lifecycle of the AI Solution.

Parties may wish to use this short-form clause if it is required or considered
more practical.

OBLIGATIONS OF PROVIDERS OF HRAIS UNDER THE EU AI ACT

An optional short-form version of the above clauses under which the Provider
warrants compliance with the EU AI Act.

Drafting


The Provider represents and warrants that it and the AI Solution shall at all
times comply with all obligations set out in the EU AI Act applicable to the
Provider.

Parties may wish to use this short-form clause if it is required or considered
more practical.

OBLIGATIONS OF DEPLOYERS OF HRAIS UNDER THE EU AI ACT

The Provider of an HRAIS is likely to seek assurances from the Deployer that the
Deployer has complied with all of its obligations under Art 26.

Drafting


The Deployer warrants that it has complied with all of its obligations set out
in Art 26 of the EU AI Act [including, without limitation, the specific elements
set out in (Clause/Appendix - The Deployer’s Obligations)].

Consider whether it is appropriate/necessary to describe in detail any of the
Deployer’s obligations.

POST-MARKET MONITORING UNDER THE EU AI ACT

The Provider is required to establish and document a post-market monitoring
system in a manner which is proportionate to the nature of the AI technologies
and the risks of the AI system.

The Provider may require the Deployer to co-operate as regards compliance with
the post-market monitoring obligations, including the provision of data
throughout the lifetime of the AI system.

Drafting


The Deployer shall co-operate with the Provider including by allowing the
Provider to systematically collect, document and analyse relevant data to allow
the Provider to meet its obligations under Art 72 of the EU AI Act relating to
the continuous compliance of the AI Solution with the requirements of Chapter
III of the EU AI Act.

The Deployer may wish to consider including specific provisions which require
the Provider to keep such data confidential and secure, and to use it only for
the purpose of post-market monitoring.

OBLIGATIONS OF DEPLOYERS OF HRAIS UNDER THE EU AI ACT

An optional short-form version of the above clauses under which the Deployer
warrants compliance with the EU AI Act.

Drafting


The Deployer represents and warrants that it shall at all times comply with all
obligations set out in the EU AI Act applicable to the Deployer, including
without limitation in relation to its use of the AI Solution.

Parties may wish to use this short-form clause if it is required or considered
more practical.

FUNDAMENTAL RIGHTS IMPACT ASSESSMENTS UNDER THE EU AI ACT

Where the Deployer has to carry out a Fundamental Rights Impact Assessment
('FRIA'), this clause enables the Deployer to require the Provider to make a
FRIA previously carried out by the Provider available to the Deployer.

Drafting


If the Deployer is required to carry out a FRIA under the EU AI Act, the
Provider shall, where requested by the Deployer, make available to the Deployer
any previous or existing FRIAs carried out by the Provider in relation to the AI
Solution to enable the Deployer to assess whether it can rely on such previous
or existing FRIA.

Consider whether a FRIA might be confidential and/or specific to the context in
which the AI Solution is deployed/used. If it is not possible to provide it, the
Parties should consider alternative obligations, such as an obligation for the
Provider to carry out a new FRIA specific to the use case(s) for which it will
make the AI Solution available to the Deployer.

TRANSPARENCY OBLIGATIONS UNDER THE EU AI ACT

Where the AI Solution interacts with individuals directly or creates synthetic
content, certain transparency obligations will apply to the Provider and/or the
Deployer.

The Parties may wish to seek assurances that the transparency requirements for
the AI Solution have been met.

General Transparency Obligations


The Provider represents and warrants that it has complied with its transparency
obligations set out in Art 50 of the EU AI Act applicable to the Provider.

The Deployer represents and warrants that it has complied with its transparency
obligations set out in Art 50 of the EU AI Act applicable to the Deployer.

Interacting with Individuals


The Provider represents and warrants that it has designed and developed the AI
Solution so that individuals are informed that they are interacting with the AI
Solution [as required by Art 50(1) of the EU AI Act].

Marking Outputs


The Provider represents and warrants that the outputs of the AI Solution are
marked in a machine-readable format and detectable as artificially generated or
manipulated [as required by Art 50(2) of the EU AI Act].

Emotion Recognition and Biometric Categorisation Systems


The Deployer represents and warrants that where the AI Solution is a relevant
emotion recognition system or a biometric categorisation system, the AI Solution
informs individuals exposed to the AI Solution of the operation of the system
[as required by Art 50(3) of the EU AI Act].

Deep fakes / content to inform the public on matters of public interest


The Deployer represents and warrants that where the AI Solution generates or
manipulates content, the Deployer discloses that the content has been
artificially generated or manipulated [as required by Art 50(4) of the EU AI
Act].

PROCEDURE UNDER THE EU AI ACT

Drafting


The Provider represents and warrants that it has followed the procedure set out
in Art 52 of the EU AI Act.

This may be covered by general compliance with the law, but it may also be worth
considering obligations around notification and disclosure if Art 52 is
triggered, and any necessary recall or remedies if the Provider has to require
the Deployer to stop using the AI Solution due to non-compliance.

BREACH UNDER THE EU AI ACT

Consider contractual provisions dealing with a scenario in which the Provider
has been found in breach, or is ordered to make changes to AI systems or take
other steps.

Drafting


In the event that the Provider becomes aware that the AI Solution has been found
to present a risk as defined under Art 82(1) of the EU AI Act, the Provider will
inform the Deployer without unreasonable delay, and such notification will
include information on the nature of the Art 79 evaluation and the measures
which are to be, or have been, taken to address the identified risks.

COMPLAINTS UNDER THE EU AI ACT

The Provider or the Deployer may want protection before the other files a
complaint under Art 85 (e.g., prior notification, opportunity to remediate,
cooperation in any subsequent investigation).

Drafting


In the event that a Party becomes aware that the AI Solution is the subject of a
complaint filed under Art 85 of the EU AI Act (an 'Art 85 Complaint'), such
Party will inform the other Party of the nature of the Art 85 Complaint and
provide reasonable assistance in any subsequent investigation.

Prior to either Party filing an Art 85 Complaint in respect of the AI Solution,
the Parties will follow the dispute resolution procedures set out in [reference
to dispute resolution clause if applicable].

THIRD PARTY SUPPLIERS TO PROVIDERS UNDER THE EU AI ACT

The Provider should put in place a written agreement with any supplier (other
than a supplier under a free and open-source licence who is not providing a GPAI
Model) whose components or processes are incorporated in the Provider’s AI
system, setting out the assistance needed for compliance with the EU AI Act.

Drafting


The Parties each warrant and represent that they have entered into (and will
continue to enter into) a written agreement with each supplier of components,
processes, or services that are incorporated into or used for the development of
the AI Solution (each a 'Supplier') under which:

(a) the Supplier acknowledges that the relevant services are being provided by
it in relation to the AI Solution, which is regulated under the EU AI Act; and

(b) the Supplier shall provide the relevant contracting Party with all
information, capabilities, technical access, and other assistance in accordance
with Best Industry Practice, which is reasonably required by the relevant
contracting Party to enable it to fully comply with the EU AI Act.

Although the EU AI Act refers to a provider requiring such assistance, we
consider this should be extended to deployers who engage third parties (in
addition to the provider) to provide other tools, processes, components, and
services in relation to the AI system.

Recital 88 of the EU AI Act provides insight into what type of services are
relevant, including model training, model retraining, model testing and
evaluation, integration into software, or other aspects of model development.

This does not apply to suppliers providing such items under a free and
open-source licence, unless the item is a GPAI Model.

Note that the EU AI Act states that the AI Office may (so this is not a
mandatory requirement) develop and recommend voluntary model terms covering this
area.

IPR INFRINGEMENT INDEMNITY FOR AI SYSTEMS AND GPAI MODELS

GPAI Models and AI systems may give rise to IPR infringement claims against the
Provider and the Deployer, for which protection should be sought in the
agreement.

Drafting


The Provider shall indemnify and keep the Deployer fully and effectively
indemnified on demand against all Losses of whatsoever nature arising out of or
in connection with any claim that the receipt or use of the AI Solution or its
outputs, as permitted by this Agreement, infringes the Intellectual Property
Rights of a third party.

The Deployer shall indemnify and keep the Provider fully and effectively
indemnified on demand against all Losses of whatsoever nature arising out of or
in connection with any claim that the AI Solution or its outputs infringes the
Intellectual Property Rights of a third party to the extent that such claim
arises as a result of the use of the AI Solution or its outputs by the Deployer
in a manner not permitted under this Agreement or the AI Solution’s
specifications.

This is an optional short-form clause for use where the contract does not
contain a suitable IPR infringement indemnity whose scope covers the relevant
scenario. The reference to the AI Solution should cover scenarios involving both
GPAI Models and AI systems.

Larger service providers may include non-negotiable conditions (e.g., strict
notice requirements) that apply before a Party can rely on an indemnity. The
indemnity may therefore need to be tailored depending on the nature and
bargaining power of the Parties, the AI Solution, and other factors.

'Losses' and 'Intellectual Property Rights' will need to be defined in the
agreement to reflect the negotiated risk allocation.


TERMINATION AND SUSPENSION

SUSPENSION EVENTS UNDER THE EU AI ACT

Non-compliance with the EU AI Act may be so serious as to justify suspension or
termination.

Drafting


If [The Provider reasonably determines that] a Provider Suspension Event has
arisen, the Provider may, on provision of written notice to the Deployer [in
accordance with clause [XX]] immediately suspend the Deployer’s (and/or any of
its end users’) access to or use of any portion or all of the AI Solution.

If [The Deployer reasonably determines that] a Deployer Suspension Event has
arisen, the Deployer may, on provision of written notice to the Provider [in
accordance with clause [XX]] immediately suspend [insert relevant obligations].

The Parties will want to consider how the clauses interact with their
termination and suspension regimes. This draft clause should be considered
alongside those but seeks to identify which of the EU AI Act related events are
likely to warrant attention by the Parties.

Suspension and termination events will need to be defined in the contract and
might include, for example, prohibited use or misrepresented classifications.
They may be intensively negotiated, given the potential ramifications for each
of the Parties.

It is assumed that the Parties’ contract will contain a clause governing the
service of notices, which may be cross-referred to in this clause.

TERMINATION EVENTS UNDER THE EU AI ACT

Drafting


In the event that a Provider Suspension Event: (i) has not been remedied [to
The Provider’s reasonable satisfaction] within 30 days of the notice provided
pursuant to [Suspension Events]; or (ii) is incapable of remedy, the Provider
shall be entitled, on provision of written notice [in accordance with clause
[XX]] to the Deployer, to terminate this Agreement with immediate effect.

In the event that a Deployer Suspension Event: (i) has not been remedied [to
The Deployer’s reasonable satisfaction] within 30 days of the notice provided
pursuant to [Suspension Events]; or (ii) is incapable of remedy, the Deployer
shall be entitled, on provision of written notice [in accordance with clause
[XX]] to the Provider, to terminate this Agreement with immediate effect.

The Parties will want to consider how the clauses interact with their
termination and suspension regimes. This draft clause should be considered
alongside those but seeks to identify which of the EU AI Act related events are
likely to warrant attention by the Parties.

Suspension and termination events will need to be defined in the contract and
might include, for example, prohibited use or misrepresented classifications.
They may be intensively negotiated, given the potential ramifications for each
of the Parties.

It is assumed that the Parties’ contract will contain a clause governing the
service of notices, which may be cross-referred to in this clause.


DISCLAIMER

The sample clauses provided are for guidance purposes only and do not constitute
legal advice.

I accept no liability or responsibility for any reliance placed on these
materials or their use in contractual agreements.

Independent legal advice should be sought to ensure compliance with the EU AI
Act and other relevant legislation.

I am a lawyer, but I am not your lawyer.

LICENSE

This project is made available on an as-is and as-available basis, for
commercial and non-commercial use.

This tool and its source code are only to be used on this website.

The clauses are derived from the SCL AI Act clauses published by the Society for
Computers and Law (SCL) AI Group.

The SCL AI Act clauses are licensed under the Creative Commons Attribution 4.0
International License. To view a copy of this license, visit
http://creativecommons.org/licenses/by/4.0/.

For the original clause text material, please visit: SCL AI Group EU AI Act
Contractual Clauses

ABOUT

This tool was developed by Rich Folsom to assist with drafting AI-related
contractual clauses in light of the EU AI Act.

The clauses are drafted with English law in mind but could be adapted to other
jurisdictions.

The clauses are based on the AI Act clauses published by the Society for
Computers and Law (SCL) AI Group. You can find more information about the SCL AI
Group here.

The source material for these clauses can be found here.

This tool uses the version of the SCL AI Act clauses published on 13 October
2024.

USER GUIDE

CUSTOMIZING CLAUSES

1. Use the input fields on the left to modify variables that appear throughout
the clauses.

2. Within each clause, you'll see highlighted options. Click on an option to
select it. The selected option will be bold and remain highlighted, while
unselected options will be grayed out.
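
As an illustration only, the substitution and selection behaviour described
above could be sketched in TypeScript as follows. The template syntax, function
names and placeholder terms here are hypothetical assumptions, not the tool's
actual source code:

// Illustrative sketch only; hypothetical names, not the tool's source.
// "{provider}" marks a defined term from the left-hand input fields;
// "[...] OR [...]" marks mutually exclusive drafting options.

// Replace each {placeholder} with its defined term, leaving unknown
// placeholders untouched.
function substituteTerms(template: string, terms: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (m, key: string) => terms[key] ?? m);
}

// Resolve a "[first option] OR [second option]" pair to the clicked option,
// stripping the surrounding brackets.
function pickOption(pair: string, selected: 0 | 1): string {
  const options = pair.split(" OR ").map((o) => o.replace(/^\[|\]$/g, ""));
  return options[selected];
}

const template =
  "[{provider} warrants and represents that] OR [The Parties agree that]";
console.log(pickOption(substituteTerms(template, { provider: "The Provider" }), 0));
// -> "The Provider warrants and represents that"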

COPYING CLAUSES

Each clause has one or two clipboard buttons:

 * The first button copies the entire clause, including all options.
 * The second button appears after you've made a selection and copies only the
   selected options.

After clicking a clipboard button, you'll see a checkmark briefly to confirm the
text has been copied.
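
As an illustration only (not the tool's actual source), a copy button like this
is typically wired up with the standard browser Clipboard API, with the brief
checkmark implemented as a temporary swap of the button's content:

// Copy the clause text and briefly show a checkmark on the button.
// The Clipboard API requires a secure (https) context.
async function copyClause(text: string, button: HTMLElement): Promise<void> {
  await navigator.clipboard.writeText(text);
  const previous = button.textContent;
  button.textContent = "✓"; // brief confirmation
  setTimeout(() => { button.textContent = previous; }, 1500);
}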

ASSUMPTIONS AND SCENARIOS

The drafting of these clauses assumes:

 * Two commercial parties (the "Parties") contracting under English law.
 * The "Provider" under the EU AI Act is a party to the contract.

However, there may be scenarios where these assumptions don't apply:

 * In systems integration contracts, the Provider might be passing through
   commitments from a third-party AI system provider. In this case, the Provider
   may limit its responsibility to what the third party has committed to.
 * The Deployer might have licensed the AI system, which the Provider will be
   integrating. In this scenario, the Deployer would pass through terms required
   for compliance with the AI system license.

In such cases, the clauses may need to be adapted to address these different
relationships and responsibilities.

TIPS

- You can change your selection at any time by clicking on a different option.

- The tool updates in real-time as you make changes, allowing you to see the
final clause immediately.

- Consider the specific scenario of your contract and adapt the clauses as
necessary to reflect the actual relationships and responsibilities of the
parties involved.

