Office for AI Expert Workshop: AI life cycle accountability

Activity: Participating in or organising an event › Participation in workshop, seminar, course

Description

The UK’s AI regulation white paper was published in March 2023. It proposes an “innovative and iterative” approach to the regulation of AI.

Existing regulators will be expected to implement the framework underpinned by five cross-sectoral principles: (1) safety, security, and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; (5) contestability and redress. These principles are not currently underpinned by any new legal powers or duties, and regulators will be expected to implement them within their existing remits.

This approach can be distinguished from other proposed regulatory frameworks, including the EU's AI Act, which sets out harmonised rules for the development, placing on the market, and use of AI in the European Union (EU).

The UK white paper proposes that legal responsibility for compliance should be allocated to the actors in the AI life cycle best able to identify, assess, and mitigate risks effectively. It states that incoherent or misplaced allocation of accountability could hinder innovation.

This workshop is one of several events taking place as part of the consultation on the AI regulation white paper. Written responses to the white paper can be submitted until 21st June 2023.

During the workshop, we will address the following main questions:

1. Which actors in the AI life cycle are best placed to mitigate risks?
2. To what extent does the current system allocate accountability to the actors best placed to mitigate risks?
3. Should ‘upstream’ market actors (e.g., developers of foundation models, providers of cloud services, data brokers) bear more liability than they do under current regulatory arrangements?
4. If so, which actors and under what circumstances?
5. Which changes to existing UK law and policy would best improve allocation of accountability throughout the AI life cycle?
6. How can tools for trustworthy AI (such as assurance techniques and technical standards) help address accountability gaps across the AI life cycle? What evidence is there to support their efficacy?
Period: 1 Jun 2023
Event type: Conference