Neale Welch

Interpretive AI Advisory

Example Analysis

EU AI Act — Human Oversight (Art. 14)

When consultation language becomes operational meaning

The following example illustrates how evaluative regulatory language may behave once it is routinely summarised, paraphrased, and relied upon within AI-mediated environments.

The purpose is not to interpret Article 14 authoritatively, nor to assess compliance with it. It is to examine how flexible legislative drafting can begin to function as a settled operational requirement once compressed and recirculated through institutional systems.

What follows focuses on where meaning is structurally likely to settle before formal clarification occurs.

Source text (quoted)

The full text of Regulation (EU) 2024/1689 (Artificial Intelligence Act), including Article 14, is available on EUR-Lex. Paragraphs 1 to 4 are reproduced below; the sub-points of paragraph 4 are abridged.

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.

2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:

(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.

4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:

(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation…

(b) to remain aware of the possible tendency of automatically relying or over-relying on the output… (automation bias)…

(c) to correctly interpret the high-risk AI system’s output…

(d) to decide… not to use… or… override or reverse the output…

(e) to intervene… or interrupt the system through a ‘stop’ button…

Where meaning is likely to settle (without being consciously fixed)

1) “Effectively overseen”

This phrase reads as if it is concrete, but it is structurally elastic. In practice it can settle into at least three incompatible meanings, depending on who is relying on it: for a regulator, active monitoring by a person with real authority to intervene; for a supplier, the presence of a human-machine interface that makes oversight possible in principle; for a deploying institution, a documented procedure under which outputs can be reviewed or overridden when needed.

Once one of these becomes the repeated interpretation in summaries, internal policies, vendor questionnaires, or procurement notes, it begins to behave like the operative requirement.

2) “Aim to prevent or minimise”

“Aim” is not a number, but neither is it empty. Under AI mediation this language often stabilises as a binary: either oversight must actually prevent the relevant risks (an obligation of result), or oversight need only be designed with risk reduction in view (an obligation of effort).

Neither reading is inherently incorrect. The issue arises when one reading becomes the relied-upon meaning through repetition.

3) “Commensurate”, “appropriate”, “proportionate”

These are consultation-shaped words. They allow the text to travel across contexts.

Under repeated AI summarisation, they frequently collapse into a generalised formulation such as “human oversight proportional to risk,” which appears resolved while leaving unanswered what proportionality requires in a particular operational setting.

4) The list of enabled capacities

Paragraph 4 appears more concrete, yet each item contains its own interpretive hinge: what counts as “properly” understanding the system; how awareness of automation bias is evidenced rather than merely asserted; against what standard outputs are “correctly” interpreted; whether overriding is an expected control or an exceptional fallback; and whether a ‘stop’ control must be practically usable, not only present.

These hinges are the points at which institutional reliance converts flexible drafting into settled operational meaning.

Plausible AI-mediated compressions

(These are representative compressions commonly observed when regulatory text is summarised for operational use.)

Compression A — compliance summary form

“High-risk AI must have human oversight with measures proportional to risk, enabling humans to understand the system, monitor it, interpret outputs, override decisions, and stop the system.”

Compression B — procurement / vendor questionnaire form

“Vendor confirms human oversight: monitoring, explainability, override, and stop control are available.”

Compression C — internal policy form

“Use high-risk AI only with human review and the ability to intervene.”

Each compression is reasonable. Each also quietly fixes meaning by selecting which elements are treated as the requirement.

Once one compressed form becomes the repeated reference point, it begins to structure behaviour even though the originating clause remains formally unchanged.
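
The selection effect in these compressions can be made concrete. The following minimal sketch, in Python, treats the enabled capacities of Article 14(4) as a set and compares each compression against it. The capacity labels, and the mapping from each compressed form to the capacities it retains, are simplifying assumptions made for illustration, not a legal reading.

# Illustrative only: capacity labels and compression mappings are
# simplified assumptions for this sketch, not an authoritative reading.

ARTICLE_14_4 = {
    "(a) understand capacities and limitations",
    "(a) monitor operation",
    "(b) remain aware of automation bias",
    "(c) correctly interpret output",
    "(d) decide not to use, or override/reverse output",
    "(e) intervene or stop the system",
}

# What each compressed form above plausibly retains.
COMPRESSIONS = {
    "A (compliance summary)": ARTICLE_14_4 - {"(b) remain aware of automation bias"},
    "B (vendor questionnaire)": {
        "(a) monitor operation",
        "(c) correctly interpret output",      # read loosely as "explainability"
        "(d) decide not to use, or override/reverse output",
        "(e) intervene or stop the system",
    },
    "C (internal policy)": {
        "(a) monitor operation",               # read loosely as "human review"
        "(e) intervene or stop the system",
    },
}

# Print what each compression leaves out of the original clause.
for name, retained in COMPRESSIONS.items():
    for dropped in sorted(ARTICLE_14_4 - retained):
        print(f"Compression {name} silently drops: {dropped}")

On these assumptions, every compressed form drops the automation-bias element of 14(4)(b). Once any one of them becomes the reference point, that element quietly stops being part of “the requirement”.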

Why interpretive authority becomes unavoidable here

This is not about whether Article 14 is good policy.

It concerns what occurs when consultation-shaped language becomes operational through repetition and reliance.

A regulator may later ask what “effective oversight” meant in a deployment. A supplier may equate “human oversight” with the existence of an interface. An institution may treat “override” as an exceptional fallback rather than an ongoing control. An internal policy may cite a compressed summary as though it were the full text.

At that stage interpretation has already shaped action.

Interpretive authority becomes unavoidable when an organisation must determine which meaning is actually being produced and relied upon in its environment, and when leaving that meaning unexamined is itself a consequential position.