Towards a Secure Framework for Regulating Artificial Intelligence Systems
Journal article, 2025

Regulating high-risk artificial intelligence (AI) systems is an urgent issue, yet technical infrastructure for their effective regulation remains scarce. In this paper, we address this gap by identifying key challenges in developing technical frameworks for regulating AI systems and by proposing conceptual, methodological, and practical solutions to these challenges. We introduce the concept of an AI system's operational qualification and propose the temporal self-replacement test, akin to certification tests for human operators, to examine it. We propose measuring operational qualification across the operational properties critical for regulatory fitness and introduce the operational qualification score as a pragmatic measure of an AI system's regulatory fitness. In addition, we design and develop a Secure Framework for AI Regulation (SFAIR), a tool that leverages the proposed test and measure for automatic, recurrent, and secure examination of an AI system's operational qualification and attestation of its regulatory fitness. Key strengths of SFAIR include its regulatory focus, its flexibility in adapting to evolving regulatory requirements, and its conformity to the secure-by-design principle. To support the latter, we introduce a novel threat model for AI regulation frameworks. Considering the identified threats, we secure SFAIR's operations using randomization, masking, encryption-based schemes, and real-time monitoring, and we leverage AMD's Secure Encrypted Virtualization-Encrypted State (SEV-ES) for enhanced system security. We validate the efficacy of the temporal self-replacement test and the practical utility of SFAIR by demonstrating, on an open-source, high-risk AI system, its capability to support regulatory authorities in automated, recurrent, and secure qualification examination and attestation of regulatory fitness. Finally, we make the source code of SFAIR publicly available.
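The abstract describes aggregating per-property measurements into a single operational qualification score that attests regulatory fitness. The paper's actual definition is not reproduced on this page, so the following is only a minimal sketch of one plausible form (a weighted average with a pass threshold); the property names, weights, and threshold value are illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch of an operational qualification score.
# ASSUMPTIONS: the property names, weights, and the 0.8 pass threshold
# below are illustrative only and do not come from the paper.

def qualification_score(property_scores, weights):
    """Aggregate per-property scores (each in [0, 1]) into one
    weighted-average score in [0, 1]."""
    total_weight = sum(weights[p] for p in property_scores)
    weighted_sum = sum(property_scores[p] * weights[p] for p in property_scores)
    return weighted_sum / total_weight

def is_regulatory_fit(property_scores, weights, threshold=0.8):
    """Attest regulatory fitness if the aggregate score meets the threshold."""
    return qualification_score(property_scores, weights) >= threshold

# Example with three assumed operational properties.
scores = {"accuracy": 0.92, "robustness": 0.85, "fairness": 0.78}
weights = {"accuracy": 0.5, "robustness": 0.3, "fairness": 0.2}
print(round(qualification_score(scores, weights), 3))  # → 0.871
print(is_regulatory_fit(scores, weights))  # → True
```

A recurrent examination, as the abstract describes, would simply re-run such a computation on fresh measurements at each attestation interval.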

trustworthy AI

secure-by-design

high-risk AI

AI testing

qualification testing

AI regulation

Authors

Haroon Elahi

Chalmers, Computer Science and Engineering, Formal Methods

University of Gothenburg

Nian Liu

Southern University of Science and Technology

Jiatong Chen

Southern University of Science and Technology

Fengwei Zhang

Southern University of Science and Technology

IEEE Transactions on Dependable and Secure Computing

1545-5971 (ISSN) 1941-0018 (eISSN)

Vol. In Press

Subject categories (SSIF 2025)

Computer Science

DOI

10.1109/TDSC.2025.3616288

More information

Last updated

2025-10-13