Context

For mid-career professionals in risk, audit, or management, the rapid adoption of Artificial Intelligence presents a specific challenge: how to apply established governance principles to a non-deterministic technology. This note outlines where AI sits within the standard Three Lines of Defence model, helping you position your skills and oversight responsibilities effectively.

Defining AI Governance

AI Governance is simply the framework of accountability, authority, and control that ensures automated systems are used responsibly. It is the mechanism by which an organisation retains meaningful control over its technology.

In practice, governance ensures that decisions influenced by AI remain lawful, secure, and aligned with the organisation’s agreed risk appetite. It is not about stopping the use of technology, but about ensuring that the organisation understands what the technology is doing.

This governance typically applies across the full lifecycle, broadly split into two phases:

  1. Design and Procurement: Establishing standards and testing frameworks before code is written or software is purchased.
  2. Operational Use: Monitoring systems continuously once they are live and supporting business processes.

When AI systems operate alongside staff and legacy technology, governance ensures there is clear ownership, independent challenge, and accountability for any outcomes derived from the process.
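
To make these lifecycle and ownership expectations concrete, many organisations keep a central inventory of their AI systems. The sketch below is a minimal illustration in Python (3.10+ syntax) of what a single inventory record might capture; the record structure, field names, and LifecyclePhase values are assumptions made for the example, not a prescribed standard.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum


    class LifecyclePhase(Enum):
        # The two broad governance phases described above.
        DESIGN_AND_PROCUREMENT = "design_and_procurement"
        OPERATIONAL_USE = "operational_use"


    @dataclass
    class AISystemRecord:
        # One entry in a hypothetical AI system inventory, capturing the
        # ownership and accountability fields that governance relies on.
        system_name: str
        business_owner: str                    # named risk owner (clear ownership)
        phase: LifecyclePhase
        independently_validated: bool = False  # has independent challenge taken place?
        last_validation: date | None = None
        audit_findings: list[str] = field(default_factory=list)

A register like this gives oversight functions one place to answer the basic questions: who owns the system, which phase is it in, and when was it last independently challenged?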

The Case for Oversight

Failures in AI deployment can stem from technical errors, but they arguably arise more frequently from governance deficits. In a professional context, common points of failure include the following (a sketch after the list shows how the first two might be checked):

  • Orphaned ownership: Uncertainty regarding who owns the model or process once it moves from development to operations.
  • Lack of challenge: An absence of independent validation by stakeholders outside the technical team.
  • Over-reliance: Accepting automated outputs without adequate human oversight or review.
  • The “Black Box” problem: A limited understanding of the technology at the Board or senior management level, leading to ineffective supervision.
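
The first two failure modes lend themselves to simple mechanical checks against an inventory of the kind sketched earlier; the last two are cultural and do not reduce to automation. The function below builds on the hypothetical AISystemRecord above, and its rules and wording are illustrative rather than a prescribed policy.

    def governance_exceptions(record: AISystemRecord) -> list[str]:
        # Flag the first two failure modes for a single inventory record.
        # Illustrative only: real checks would reflect the organisation's
        # own policies and thresholds.
        issues: list[str] = []
        if not record.business_owner.strip():
            issues.append("Orphaned ownership: no named risk owner.")
        if record.phase is LifecyclePhase.OPERATIONAL_USE and not record.independently_validated:
            issues.append("Lack of challenge: live system without independent validation.")
        return issues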

It is worth noting that while organisations frequently rely on third parties for AI technology, the regulatory responsibility for its use, and for any harm caused, remains with the deploying organisation. You cannot outsource accountability. Therefore, AI Governance must be integrated with existing disciplines such as cyber security, data privacy, and conduct risk, rather than treated as a siloed specialism.

The Three Lines of Defence Model

For governance to operate properly, responsibilities must be distributed clearly. The standard Three Lines of Defence (3LoD) model remains the most effective way to structure this.

First Line: Operational Management

The First Line consists of the business units and technical teams that own, build, or procure the AI systems.

  • Role: The primary risk owners.
  • Responsibility: They must identify risks, design and operate controls, and ensure data quality. They remain accountable for the performance and outcomes of the systems they use. If the AI makes a mistake, the First Line owns that mistake.

Second Line: Risk Management and Compliance

These are the oversight functions that operate independently of the First Line.

  • Role: To provide the risk framework and challenge the First Line.
  • Responsibility: To interpret regulatory requirements, facilitate the setting of risk appetite, and validate that First Line controls are actually working. In the context of AI, this includes ensuring that models have been validated before deployment.

Third Line: Internal Audit

The independent assurance function.

  • Role: To report independently and directly to the Board Audit Committee.
  • Responsibility: To provide objective assurance that the governance framework is operating effectively. They check that the First and Second Lines are talking to each other and that the controls claimed are the controls in practice.
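
To show how the three lines might interact in practice, the sketch below extends the earlier hypothetical inventory record with a simple deployment gate: a named First Line owner and evidenced Second Line validation must both be in place before a system moves into operational use. The function name and rules are assumptions for illustration, not a standard control.

    def approve_for_operational_use(record: AISystemRecord) -> AISystemRecord:
        # A minimal gate reflecting the division of duties above:
        # First Line ownership and Second Line validation must both
        # be evidenced before a system goes live.
        if not record.business_owner.strip():
            raise ValueError("Blocked: no named First Line owner.")
        if not record.independently_validated:
            raise ValueError("Blocked: no Second Line validation on record.")
        record.phase = LifecyclePhase.OPERATIONAL_USE
        return record

The Third Line's role is then to test, after the fact, that gates like this were applied as claimed rather than bypassed.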

Summary

For professionals working in risk, audit, and compliance, AI is not a separate discipline but an extension of existing responsibilities. It represents a set of technologies and practices that must be studied, understood, and managed like any other business tool.

Organisations with sound governance tend to adopt technology with a clear view of the risks involved. Those without it often find that weaknesses only become apparent after a significant issue has emerged.

For a detailed breakdown of the specific capabilities required, see The Six Core Skill Domains of AI Governance.