Towards a Secure Framework for Regulating Artificial Intelligence Systems
Journal article, 2025
Regulating high-risk artificial intelligence (AI) systems is an urgent issue, yet technical infrastructure for their effective regulation remains scarce. In this paper, we address this gap by identifying key challenges in developing technical frameworks for regulating AI systems and by proposing conceptual, methodological, and practical solutions to these challenges. We introduce the concept of an AI's operational qualification and propose the temporal self-replacement test, akin to certification tests for human operators, to examine an AI's operational qualification. We propose measuring this qualification across the operational properties critical to the AI's regulatory fitness and introduce the operational qualification score as a pragmatic measure of that fitness. Building on the proposed test and score, we design and develop the Secure Framework for AI Regulation (SFAIR), a tool for the automatic, recurrent, and secure examination of an AI's operational qualification and attestation of its regulatory fitness. Key strengths of SFAIR include its regulatory focus, its flexibility in adapting to evolving regulatory requirements, and its conformity to the secure-by-design principle. To this end, we further introduce a novel threat model for AI regulation frameworks. Guided by the identified threats, we secure SFAIR's operations with randomization, masking, encryption-based schemes, and real-time monitoring, and we leverage AMD's Secure Encrypted Virtualization-Encrypted State (SEV-ES) for enhanced system security. We validate the efficacy of the temporal self-replacement test and the practical utility of SFAIR by demonstrating, on an open-source, high-risk AI system, its ability to support regulatory authorities in the automated, recurrent, and secure examination of an AI's qualification and attestation of its regulatory fitness. Finally, we make the source code of SFAIR publicly available.
trustworthy AI
secure-by-design
high-risk AI
AI testing
qualification testing
AI regulation