
Clarifying and Strengthening the Regulation of Medical Device Software Including Artificial Intelligence (AI)

December 10, 2025 · 5 min read

In mid-2025, the national regulator released a comprehensive review addressing the regulation of medical device software, with particular emphasis on software as a medical device (SaMD) and artificial intelligence/machine learning (AI/ML) applications. The review was prompted by rapid technological change, the expanding use of cloud and platform distribution models, and increasing stakeholder concern about safety, transparency, and accountability in AI-driven clinical systems. This summary provides a structured analysis of the review’s scope, methodology, principal findings, and practical implications for sponsors, developers, importers, and quality-systems professionals. The tone and structure align with Quality Systems Now’s (QSN) approach to scientific regulatory reporting.

Scope and Methodology

The review examined the adequacy of the existing regulatory framework for medical device software and AI-enabled systems, testing whether current legislation, classification rules, and guidance remain fit for purpose. It specifically addressed:

  • Definitions and terminology used in the regulatory framework (for example, manufacturer, sponsor, supply, distributor) and whether they map to contemporary software/AI lifecycle roles (developer, deployer, cloud host, integrator).

  • Distribution and supply pathways for software delivered digitally or via cloud services.

  • Allocation of responsibility and accountability across multi-party development and deployment models.

  • The suitability of classification rules and essential principles for AI-enabled devices.

  • The need for supplementary guidance on issues such as adaptive algorithms, dataset provenance, post-market surveillance, transparency, and change control.

The review used stakeholder consultation, technical reference groups, workshops and written submissions to gather evidence across industry, clinical, consumer and developer communities. The analysis included mapping existing regulatory controls against typical AI/ML deployment scenarios and lifecycle change patterns.

Strengths of the Current Framework

The review concluded that the existing regulatory model retains key strengths when applied to medical device software:

  • A technology-agnostic, risk-based classification approach provides adaptable oversight across diverse device types and software functions.

  • The set of fundamental safety and performance principles remains an appropriate foundation for assessing SaMD and AI-enabled systems.

  • Established regulatory processes and pre-market assessment capabilities allow for targeted scrutiny of higher-risk devices while enabling lower-risk innovation pathways.

Stakeholders expressed broad support for the flexible, outcomes-focused philosophy underpinning the framework, noting that it avoids premature, prescriptive constraints on rapidly evolving technologies.

Principal Findings and Recommended Clarifications

While the core framework was found to be sound, the review identified multiple areas where clarification, targeted guidance, or administrative adjustment is needed rather than wholesale legislative overhaul. The principal findings are summarised below.

Definitions, Roles and Accountability

Traditional regulatory definitions—developed for physical medical devices and linear supply chains—are less precise when applied to software and AI ecosystems. The review identified ambiguity around which party constitutes the “manufacturer” or “sponsor” where development, deployment, and maintenance are performed by different organisations or when third-party cloud hosts participate. The recommendation is to clarify how existing legal roles map to software lifecycle roles, either through statutory revision or definitive guidance, to ensure responsibilities are identifiable and enforceable.

Digital Supply and Distribution

Software distribution commonly occurs via downloads, app stores, cloud platforms or software-as-a-service (SaaS). The review notes that current notions of “supply” and distribution chains may not explicitly cover these models, creating potential oversight gaps. The regulator proposes clarifications so that digital delivery and remote access channels are captured within supply-chain and traceability requirements.

Assigning Responsibility for AI-driven Outcomes

AI systems can autonomously produce outputs with direct clinical impact. The review raised concerns about how responsibility for adverse outcomes would be assigned where multiple parties contribute to model training, deployment, inference, updates, or hosting. The review recommends clearer assignment of accountability for deployment, maintenance, and change control, so regulatory obligations can be applied effectively across complex operational arrangements.

Classification and Essential Principles

The review concluded that no new AI-only classification regime is currently necessary. Existing classification rules and essential safety/performance principles provide an adequate basis when applied through a risk-based lens focusing on intended use and clinical impact. This preserves regulatory flexibility while ensuring higher-risk AI applications remain subject to appropriate scrutiny.

Guidance Needs: Adaptive AI, Data Provenance, and Post-market Surveillance

Rather than legislative change, the review emphasised the need for detailed guidance in several technical areas:

  • Change control and versioning for adaptive AI: Guidance is required to define what constitutes a “substantial change,” when re-assessment is needed, and how retraining/versioning should be documented and validated (a minimal record structure is sketched after this list).

  • Data provenance and dataset quality: Sponsors should be given clear expectations for documenting training datasets, including provenance, preprocessing, bias assessment, and lineage for reproducibility and audit.

  • Post-market performance monitoring: Robust frameworks for real-world performance monitoring, drift detection, logging, reporting and revalidation are necessary for sustained safety of AI models.

  • Transparency and claims management: Clear rules for user information, limitations, intended use statements, and advertising claims are required to prevent inadvertent over-claiming of AI capabilities.

  • Cloud and cross-jurisdictional deployment: Practical guidance for managing integrity, version control, security, and data privacy across cloud hosts and overseas servers is needed.
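
To make the change-control and provenance points concrete, the sketch below shows one way a sponsor might structure a version record for an adaptive model, linking each release to its training-data lineage and flagging changes that would trigger re-assessment. It is a minimal illustration in Python: the field names and the substantial-change test are assumptions for this example, not a regulator-prescribed schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DatasetProvenance:
        # Provenance details kept for reproducibility and audit;
        # the field names here are illustrative assumptions.
        source: str              # origin of the data (registry, site, vendor)
        collection_period: str   # when the data were gathered
        preprocessing: list      # ordered list of preprocessing steps applied
        sha256: str              # content hash for dataset lineage

    @dataclass
    class ModelVersionRecord:
        model_id: str
        version: str             # e.g. "2.1.0"
        released: date
        training_data: list      # list of DatasetProvenance entries
        change_summary: str      # what changed since the prior version
        substantial_change: bool # set per the sponsor's change-control SOP
        validation_report: str   # reference to the validation evidence

    def requires_reassessment(record: ModelVersionRecord) -> bool:
        # In this sketch, a substantial change (however the sponsor's
        # procedure defines one) is the trigger for re-assessment.
        return record.substantial_change

In practice such records would sit inside the quality management system and be generated by the training pipeline itself, so that every deployed version can be traced back to its data and validation evidence.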

Practical Implications for Stakeholders

The review has direct consequences for quality systems, regulatory strategy, and contractual governance. Key actions for organisations include:

  • Review and update contracts and governance to reflect clarified roles and responsibilities across development, deployment, and hosting partners.

  • Implement or enhance change-control, versioning and validation procedures specifically tailored to adaptive AI models.

  • Establish data governance structures to record dataset provenance, preprocessing, and model training histories.

  • Strengthen post-market surveillance capabilities to capture real-world performance metrics and enable timely corrective actions (a drift-detection sketch follows this list).

  • Ensure transparency in labelling, user instructions, and promotional materials to accurately reflect device capabilities and limitations.
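
As one illustration of real-world performance monitoring, the sketch below computes a population stability index (PSI) comparing live model scores against a validation-time baseline. PSI is only one of many drift statistics, and the 0.10/0.25 thresholds are industry conventions rather than anything the review prescribes; treat this as a starting point, not a compliant monitoring programme.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        # Bin the baseline scores, then measure how far the live score
        # distribution has shifted across those same bins.
        edges = np.histogram_bin_edges(baseline, bins=bins)
        edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live scores
        expected, _ = np.histogram(baseline, bins=edges)
        actual, _ = np.histogram(current, bins=edges)
        eps = 1e-6  # floor proportions so empty bins do not produce log(0)
        expected = np.maximum(expected / expected.sum(), eps)
        actual = np.maximum(actual / actual.sum(), eps)
        return float(np.sum((actual - expected) * np.log(actual / expected)))

    # Simulated example: post-deployment scores drift away from the
    # distribution observed during validation.
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2.0, 5.0, size=5000)
    live_scores = rng.beta(2.6, 4.2, size=5000)
    psi = population_stability_index(baseline_scores, live_scores)
    if psi > 0.25:
        print(f"PSI={psi:.3f}: significant drift, trigger review")
    elif psi > 0.10:
        print(f"PSI={psi:.3f}: moderate drift, monitor closely")
    else:
        print(f"PSI={psi:.3f}: stable")

Whichever statistic is chosen, the monitoring procedure, its thresholds, and the escalation path belong in the post-market surveillance plan, so that drift findings feed back into change control.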

Quality-systems consultancies can support these activities by creating compliant documentation, performing gap analyses, designing monitoring programs, and delivering training aligned with the regulator’s clarified expectations.

Conclusion

The regulator’s mid-2025 review acknowledges that the existing, technology-agnostic, risk-based framework remains fundamentally appropriate for SaMD and AI-enabled medical devices. However, the review identifies important practical gaps in definitions, digital supply oversight, accountability and lifecycle management for adaptive systems. The recommended path is enhanced, targeted guidance and administrative clarifications rather than widespread legislative reform. For industry stakeholders, this translates into immediate priorities: clarifying internal and contractual responsibilities, strengthening data and change-control governance, implementing robust post-market monitoring, and ensuring transparent communication of device capabilities. These measures will help organisations meet the clarified regulatory expectations while continuing to innovate in the domain of medical device software and AI.
